All objects on display reflect the history of computing at CERN.
There are three main themes: storage, CPU and networking.
With the arrival of the CDC 6600 at CERN in January 1965 came the first half-inch, 7-track tape units, with magnetic tapes at recording densities of 200, 556 and 800 bpi (bits per inch). By November 1974, there were around 6000 tape reels in the tape library.
Later, with the arrival of the CDC 7600 in March 1972, and especially in 1974, 9-track tapes were introduced with densities of 1600 and 6250 bpi and lengths of up to 3600 feet.
If the tape was mounted with a write-ring, the user could write on the tape; otherwise it was read-only. The operator would insert the ring according to the mount request unless the tape had a NO RING sticker.
The operations team struggled with frequent errors (around 5%), most likely because users could bring in their own tapes from outside. By managing the tapes inside building 513 (the so-called "closed shop"), they worked hard to improve the performance of the whole system, later bringing errors below 1%.
Over the years, the number of tape reels grew to well over 50 000.
In those days, it was common to exchange physical tapes containing data between institutions (for example, passing salary data to the Swiss Bank Corporation (later UBS AG)).
CNL 97 (1995)
This magnetic disk was one of three which interfaced with various Control Data machines. This single platter came from a Control Data 7638 Disk Storage Subsystem and could contain up to 10MB - about the size of a few MP4s on your iPod (see page 5 in particular of http://cds.cern.ch/record/1050339/files/dd-74-35.pdf, which gives a more detailed, rather technical explanation of how it was set up at CERN).
As with all magnetic disk technology:
- It's an aluminium platter (disk) with a magnetised coating (a layer of very small magnetic particles)
- It's coated on both sides, so two surfaces contain data
- The "head" (see the 2TB disks as an example - the arm with its head is still visible) contains an induction coil which is used to read and write while the platter spins (induction is the process whereby a moving magnetic field can create a current in a wire)
The head 'floats' a few microns above the surface of the disk - it never comes into direct contact. The head can crash, however, and severely scratch the disk - there is an example downstairs in the museum of a head crash that scored right through to the underlying aluminium base.
When comparing the 'big disk' to the others on display, it is quickly obvious that the basic design of disks has not changed over the decades. Disk evolution has been characterised by continued reduction in size and increased capacity/density per disk.
These cartridges represent the first step in technologies to automate the reading, writing and retrieval of data. Prior to this, all data had to be retrieved, loaded and dismounted by hand.
The IBM 3850 Mass Storage System (simply known as the MSS) was first announced by IBM in late 1974; its data cartridges took the form of circular cylinders, each able to store 50 megabytes of data.
The tape cartridges always worked in pairs, so two Mass Storage Cartridges = one Mass Storage Volume = 100 MB. The cartridges were stored in hexagonal cells (a honeycomb arrangement), rather than square ones, to save space.
The MSS was able to stage out infrequently-used data from disk onto tape as well as bring it back later.
The IBM 3850 MSS was introduced at CERN in July 1978 and remained in use until July 1989. The driving force was the introduction of CERNET, linking the computers on the CERN site so that users could send data samples to the CDC and IBM central computers for storage.
The IBM 3850 was primarily used by the IBM 370/168 and IBM 3032 mainframes and was seen as extremely reliable right from the start (compared with earlier experience of tape reliability). Towards the end of its life, its data was copied onto 3480 tape cartridges.
The Oracle StorageTek T10000 T2 cartridge has a total capacity of 5 TB. It is actually manufactured by Fujifilm, stores data using Barium Ferrite (BaFe) particle technology, and is also equipped with an RFID chip. There is over 1 km of tape inside the cartridge, with 3584 data tracks, and it supports over 25000 load/unload cycles. The archival life is estimated at around 30 years and the uncorrected bit error rate is 10^-19. CERN, however, usually migrates data to newer technologies roughly every 5 years in order to keep the footprint under control.
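To put that bit error rate in perspective, a back-of-the-envelope calculation (a rough sketch based on the figures above, not an official CERN or Oracle number):

```python
# Expected uncorrected bit errors when reading one full 5 TB
# cartridge at an uncorrected bit error rate of 1e-19.
CARTRIDGE_BYTES = 5 * 10**12      # 5 TB (decimal)
BITS = CARTRIDGE_BYTES * 8        # 4e13 bits on the cartridge
BER = 1e-19                       # uncorrected bit error rate

expected_errors = BITS * BER
print(f"{expected_errors:.0e} expected bit errors per full read")
# ~4e-6: you would expect to read roughly 250,000 full cartridges
# before encountering a single uncorrected bit error.
```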
The Oracle StorageTek T10000C tape drive uses the Oracle StorageTek T10000 T2 tape cartridge and can read or write at speeds of up to 250 MB/s. In contrast to a disk drive (a random-access device), a tape drive is a streaming device. This model takes on average ~50 seconds to locate a file, but once positioned it can stream the data relatively fast. Currently (2013), we have 40 such tape drives in production (connected using Fibre Channel technology), serving over 35000 Oracle T10000-family tapes.
These square-shaped cartridges can hold up to 5TB, the equivalent of around 1100 DVDs.
The 3390 disks rotated faster than those in the previous model, the 3380. Faster disk rotation reduced rotational delay (i.e. the time required for the correct area of the disk surface to move to the point where data could be read or written). In the 3390's initial models, the average rotational delay was reduced to 7.1 milliseconds, from 8.3 milliseconds for the 3380 family.
The data transfer rate -- the speed that data can move to and from the disk surface -- was also increased, from 3.0 megabytes per second for the 3380 family to 4.2 megabytes per second for the 3390.
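The link between rotation speed and rotational delay is simple arithmetic: the average delay is the time for half a revolution. A quick sketch (the rpm values here are derived from the quoted delays, not taken from IBM specifications):

```python
def avg_rotational_delay_ms(rpm: float) -> float:
    """Average rotational delay = time for half a revolution."""
    full_revolution_ms = 60_000 / rpm   # 60,000 ms per minute
    return full_revolution_ms / 2

# 3,600 rpm gives the 8.3 ms quoted for the 3380 family; the 3390's
# 7.1 ms corresponds to roughly 4,200 rpm (derived, not a spec sheet).
print(f"3380: {avg_rotational_delay_ms(3600):.1f} ms")
print(f"3390: {avg_rotational_delay_ms(4225):.1f} ms")
```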
This drive, which originates from around 1989, would’ve been teamed up with a number of other drives and slotted into an IBM 3390 Direct Access Storage Device (DASD), a floor-to-ceiling server rack. One IBM 3390 model could house up to six drives, for a total capacity of 22.7GB. A complete IBM 3390 system had a data transfer rate of 4.2MB/sec, with an average seek time of 12.5 milliseconds. The platters probably spun at around 2,500-3,000 RPM.
While it’s hard to put an exact price on a single drive, it would’ve cost somewhere in the region of CHF 50,000 to CHF 100,000 in 1989 — or about twice that, in today’s money. That’s around CHF 50,000 per gigabyte — or one million times more expensive than today’s hard disk drives.
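The per-gigabyte figure can be sanity-checked with simple arithmetic (the prices are the estimates above, not documented invoices):

```python
# Rough check of the cost-per-gigabyte claim (illustrative arithmetic).
DASD_CAPACITY_GB = 22.7      # full IBM 3390 unit, six drives
DRIVES_PER_UNIT = 6
PRICE_TODAY_CHF = 200_000    # ~2x the upper 1989 estimate, per drive

gb_per_drive = DASD_CAPACITY_GB / DRIVES_PER_UNIT   # ~3.8 GB per drive
chf_per_gb = PRICE_TODAY_CHF / gb_per_drive
print(f"~CHF {chf_per_gb:,.0f} per GB")             # ~CHF 53,000
```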
For more information, see:
This model was a disk storage server used in the Data Centre up until 2012.
Each tray contains a hard disk drive (see the 5TB hard disk drive on the main disk display section - this actually fits into one of the trays). There are 16 trays in all per server. We have hundreds of these servers mounted on racks in the Data Centre, as can be seen.
We use disks for short-term storage, and as a buffer to hold data while writing it to tape.
This particular object was used up until 2012 in the Data Centre. It slots into one of the Disk Server trays (see the disk server on the other display table).
All about hard disks....
Hard disks were invented in the 1950s. They started as large disks up to 20 inches in diameter holding just a few megabytes. They were originally called "fixed disks" or "Winchesters" (a code name used for a popular IBM product). They later became known as "hard disks" to distinguish them from "floppy disks." Hard disks have a hard platter that holds the magnetic medium, as opposed to the flexible plastic film found in tapes and floppies.
At the simplest level, a hard disk is not that different from a cassette tape. Both hard disks and cassette tapes use the same magnetic recording techniques. Hard disks and cassette tapes also share the major benefits of magnetic storage -- the magnetic medium can be easily erased and rewritten, and it will "remember" the magnetic flux patterns stored onto the medium for many years.
The big differences between cassette tapes and hard disks:
- The magnetic recording material on a cassette tape is coated onto a thin plastic strip. In a hard disk, the magnetic recording material is layered onto a high-precision aluminum or glass disk. The hard-disk platter is then polished to mirror-type smoothness.
- With a tape, you have to fast-forward or reverse to get to any particular point on the tape. This can take several minutes with a long tape. On a hard disk, you can move to any point on the surface of the disk almost instantly.
- In a cassette-tape deck, the read/write head touches the tape directly. In a hard disk, the read/write head "flies" over the disk, just a few microns above, never actually touching it. (see image right - this is why it's so important to keep HDDs in a clean environment!)
- The tape in a cassette-tape deck moves over the head at about 2 inches (about 5.08 cm) per second. A hard-disk platter can spin underneath its head at speeds up to 3,000 inches per second (about 170 mph or 274 kph)!
- The information on a hard disk is stored in extremely small magnetic domains compared to a cassette tape's. The size of these domains is made possible by the precision of the platter and the speed of the medium.
Because of these differences, a modern hard disk is able to store an amazing amount of information in a small space. A hard disk can also access any of its information in a fraction of a second.
There are two ways to measure the performance of a hard disk:
- Data rate - The data rate is the number of bytes per second that the drive can deliver to the CPU. Rates between 5 and 40 megabytes per second are common.
- Seek time - The seek time is the amount of time between when the CPU requests a file and when the first byte of the file is sent to the CPU. Times between 10 and 20 milliseconds are common.
The other important parameter is the capacity of the drive, which is the number of bytes it can hold.
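These parameters combine to give the total time to fetch a file; a minimal sketch using the illustrative figures quoted above:

```python
def access_time_ms(seek_ms: float, rotational_ms: float,
                   file_bytes: int, rate_mb_s: float) -> float:
    """Total time to fetch a file: seek + rotational delay + transfer."""
    transfer_ms = file_bytes / (rate_mb_s * 1_000_000) * 1000
    return seek_ms + rotational_ms + transfer_ms

# A 1 MB file on a drive with a 12 ms seek, 4 ms rotational delay,
# transferring at 40 MB/s (the upper data rate quoted above):
t = access_time_ms(seek_ms=12, rotational_ms=4,
                   file_bytes=1_000_000, rate_mb_s=40)
print(f"{t:.0f} ms")   # 41 ms: 25 ms of transfer plus 16 ms of positioning
```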
About the drive mechanics:
- The platters - These typically spin at 3,600 or 7,200 rpm when the drive is operating. These platters are manufactured to amazing tolerances and are mirror-smooth (as you can see in this interesting self-portrait of the author... no easy way to avoid that!).
- The arm - This holds the read/write heads and is controlled by the mechanism in the upper-left corner. The arm is able to move the heads from the hub to the edge of the drive. The arm and its movement mechanism are extremely light and fast. The arm on a typical hard-disk drive can move from hub to edge and back up to 50 times per second -- it is an amazing thing to watch!
In order to increase the amount of information the drive can store, most hard disks have multiple platters. This drive has three platters and six read/write heads:
Storing the Data
Data is stored on the surface of a platter in sectors and tracks. Tracks are concentric circles, and sectors are pie-shaped wedges on a track, like this:
A typical track is shown in yellow; a typical sector is shown in blue. A sector contains a fixed number of bytes -- for example, 256 or 512. Either at the drive or the operating system level, sectors are often grouped together into clusters.
The process of low-level formatting a drive establishes the tracks and sectors on the platter. The starting and ending points of each sector are written onto the platter. This process prepares the drive to hold blocks of bytes. High-level formatting then writes the file-storage structures, like the file-allocation table, into the sectors. This process prepares the drive to hold files.
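The geometry described above translates directly into drive capacity; a sketch with hypothetical numbers (illustrative only, not the geometry of any drive on display):

```python
# Hypothetical drive geometry (illustrative numbers only).
SURFACES = 6              # three platters, both sides used
TRACKS_PER_SURFACE = 1024
SECTORS_PER_TRACK = 64    # early drives used a fixed count per track
BYTES_PER_SECTOR = 512

capacity = (SURFACES * TRACKS_PER_SURFACE
            * SECTORS_PER_TRACK * BYTES_PER_SECTOR)
print(f"{capacity / 1_000_000:.0f} MB")   # 201 MB for this geometry
```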
Homegrown networking technology pre-dating the internet.
This is a CERNnet card developed and built at CERN. There was a lot of space on the card between the components, so the engineers decided to put their portraits on it.
For an extensive history on the beginning of networking at CERN, see Ben Segal's notes.
Networking in the beginning was - chaos. In the same way that the theory of high energy physics interactions was itself in a chaotic state up until the early 1970's, so was the so-called area of "Data Communications" at CERN. The variety of different techniques, media and protocols used was staggering; open warfare existed between many manufacturers' proprietary systems, various home-made systems (including CERN's own "FOCUS" and "CERNET"), and the then rudimentary efforts at defining open or international standards. There were no general purpose Local Area Networks (LANs): each application used its own approach. The only really widespread CERN network at that time was "INDEX": a serial twisted pair system with a central Gandalf circuit switch, connecting some hundreds of "dumb" terminals via RS232 to a selection of accessible computer ports for interactive login.
CERNET, beginning in 1976, offered a fast file transfer service between a number of mainframes and minicomputers via 2Mbit/s serial lines using packet switching in a network of gateway nodes. Remote login (known as "virtual terminal service") was only supported to a single system, the central IBM mainframe. At the end of its ten year life (~1988) CERNET supported 100 systems, including its own version of a LAN bridge, connecting some of CERN's first Ethernets. However, even though architecturally CERNET resembled ARPAnet, all its protocols had been developed independently. It was therefore doomed, though this was of course unknown at the beginning. Even if its designers had been in contact with Vint Cerf and company, there was no efficient way to run a transatlantic collaboration. Imagine a period without electronic mail... but no, this was only introduced at CERN to any extent at the beginning of the 1980's.
10BASE5 Thick Ethernet Cable, 10Mbit/sec.
In the 1980s and early 1990s, Ethernet became more popular and provided a much faster data transmission rate. This cable, from 1983, is one of the first Ethernet cables: a thick, bulky affair. Computers were attached via "vampire taps", connectors screwed straight through the shielding of the cable.
See more at:
The first website at CERN - and in the world - was dedicated to the World Wide Web project itself and was hosted on Berners-Lee's NeXT computer. The website described the basic features of the web; how to access other people's documents and how to set up your own server. This NeXT machine - the original web server - is still at CERN. As part of the project to restore the first website, in 2013 CERN reinstated the world's first website to its original address.
A modern 2.8TB/s router, the backbone of our internet connectivity. This model was in service at CERN from 2008 until 2012.
These are sample fibre optic cables which are used for networking.
Optical fibers are widely used in network communications, where they permit transmission over longer distances and at higher bandwidths (data rates) than wire cables. Fibers are used instead of metal wires because signals travel along them with less loss and are also immune to electromagnetic interference. This is useful for somewhere like CERN where magnets with their highly powerful magnetic fields could pose a problem.
Optical fibers typically include a transparent core surrounded by a transparent cladding material. Light is kept in the core by total internal reflection.
One of the cable installation techniques used at CERN is "blowing" the fibre optic cables down tubes (other techniques involve normal cable pulling). The "blowing" operation uses compressed air, and in particular allows working time to be reduced in delicate areas (e.g. areas with high radioactivity, where radiation exposure needs to be kept to a minimum). For this method, microtubes are blown into the carrying tubes, then the fibre optic cables are blown into the microtubes.
Fibers that support many propagation paths are called multi-mode fibers (MMF), while those that only support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter, and are used for short-distance communication links and for applications where high power must be transmitted.
Single-mode fibers are used for long distance communication links. What counts as "long distance" depends on the bandwidth: for a 1Gbps link, SMF is used from 10km, whereas for 100Gbps, SMF is needed from 150m. The higher the bandwidth, the shorter the distance over which MMF can be used.
Single-mode fibres have a 9 µm core, and multi-mode fibres have a 50 µm core. Both have a cladding of 125 µm diameter.
For single-mode fibres, with multiplexing, optical fibre bandwidth can now reach 100Gbps over several hundred kilometres. Upcoming network interfaces will soon reach 200Gbps, and later 400Gbps.
A plane of magnetic core memory with 64x64 bits (4096 bits), as used in a CDC 6600. The very first CDC 6600 was delivered to CERN in 1965 and was the fastest supercomputer of its time.
Magnetic-core memory was the predominant form of random-access computer memory for 20 years (circa 1955–75). It uses tiny magnetic toroids (rings), the cores, through which copper wires were hand-threaded to write and read information.
Each core represents one bit of information. The cores can be magnetized in two different ways (clockwise or counterclockwise) and the bit stored in a core is zero or one depending on that core's magnetization direction.
The wires are arranged to allow an individual core to be set to either a "one" or a "zero", and for its magnetization to be changed, by sending appropriate electric current pulses through selected wires. The process of reading the core causes the core to be reset to a "zero", thus erasing it.
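The destructive read described above can be sketched in a few lines of Python (a toy model of the behaviour, not of the actual electronics): reading a core forces it to zero, so the memory controller must write the value back if it is to be preserved.

```python
class CorePlane:
    """Toy model of a magnetic core plane: reads are destructive."""

    def __init__(self, rows: int, cols: int):
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, row: int, col: int, bit: int) -> None:
        self.cores[row][col] = bit

    def read(self, row: int, col: int) -> int:
        bit = self.cores[row][col]
        self.cores[row][col] = 0       # reading resets the core to zero
        if bit:
            self.write(row, col, bit)  # controller restores the value
        return bit

plane = CorePlane(64, 64)              # one 4096-bit plane, as in the CDC 6600
plane.write(3, 5, 1)
assert plane.read(3, 5) == 1           # first read returns the bit...
assert plane.read(3, 5) == 1           # ...and the write-back preserved it
```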
The most powerful IBM computer system of its time, the IBM 3090 high-end processor (successor to the IBM 308X computer series) incorporated one-million-bit memory chips and Thermal Conduction Modules to provide the shortest average chip-to-chip communication time of any large general-purpose computer. As you can see on the object, it was water-cooled: hoses and rubber tubes brought water to the 'pins' which rested on the chips, and the heated water was then circulated out and away.
The IBM 3090 Model 200 (entry-level, with two central processors) and Model 400 (with four central processors) had 64 and 128 megabytes of central storage, respectively. At the time of announcement, the purchase price of a Model 200 was $5 million. A later six-processor IBM 3090 Model 600E, using vector processors, could perform computations up to 14 times faster than the earlier four-processor IBM 3084.
The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960s.
This tray is a 'dual-core' server. This means it effectively has two CPUs in it (e.g. two of your home computers, miniaturised to fit into a single box). Also note the copper cooling fins, which help dissipate the heat.
There are many advantages of using this type of CPU technology. They take up less space and this means that signals between different CPUs travel shorter distances, and therefore those signals degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.
Multi-core CPU designs require much less printed circuit board (PCB) space than do multi-chip SMP designs. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because of the decreased power required to drive signals external to the chip. Furthermore, the cores share some circuitry.
Multi-core chips also allow higher performance at lower energy. This is a critical factor when running a data centre the size of CERN's.