Light speed computing

Computer science’s self-fulfilling prophecy – Moore’s law – has provided a focus and motivation for researchers to innovate through 45 years (1) of uninterrupted exponential growth in performance. Moore’s law predicts that the number of transistors on a chip will double roughly every two years. Intel’s chips are a great example of the law in action: from the company’s first effort in 1971 – the 4004, containing 2,300 transistors – through its ubiquitous 10-million-transistor Pentium models of the 1990s, to recent processors such as 2012’s Core chips, packing in a staggering 1.4 billion transistors.
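As a rough sanity check of those figures, Moore’s law can be written as a simple compound-doubling formula. The sketch below (in Python; the 1971 and 2012 data points are taken from the text above) projects the 4004’s transistor count forward at one doubling every two years:

```python
# Back-of-the-envelope Moore's law check: project the 4004's 2,300
# transistors (1971) forward at one doubling every two years and
# compare with the ~1.4 billion transistors of a 2012 Core chip.

def moores_law(count_start, year_start, year_end, doubling_years=2):
    """Project a transistor count forward under Moore's law."""
    doublings = (year_end - year_start) / doubling_years
    return count_start * 2 ** doublings

projected = moores_law(2_300, 1971, 2012)

print(f"Projected for 2012: {projected:,.0f}")  # ~3.4 billion
print(f"Actual Core chip:   {1_400_000_000:,}")
```

Four decades of compounding land the projection within a factor of three of the real figure – the right order of magnitude, which is precisely what makes the law such a useful roadmap.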

However, with manufacturing processes closing in on the atomic scale, experts are beginning to raise concerns that computers are reaching their fundamental limit of miniaturisation. For instance, insulating barriers in chips that were once thick enough to block current are already so thin that electrons can slip straight through them – a phenomenon known as quantum tunnelling. As a result, many researchers are going back to basics, looking at the primary factors that fundamentally affect performance.
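The reason thinner barriers leak is quantitative: in quantum mechanics, the probability of an electron tunnelling through a barrier falls off exponentially with the barrier’s width. For the textbook case of a rectangular barrier of height V and width d, the transmission probability for an electron of energy E < V and mass m scales roughly as

$$T \sim e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m(V - E)}}{\hbar},$$

so halving the barrier width does not double the leakage – it takes the square root of an already tiny transmission probability, turning a negligible trickle of electrons into a significant current.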

Two clear frontrunners have emerged as the next step in computer evolution: quantum and optical. The former’s unprecedented parallelism could offer millions of times more power than today’s most advanced supercomputers. In a quantum computer, qubits replace bits as the machine’s alphabet, with atoms, ions, photons or electrons and their respective control devices working together to perform memory and processing tasks. However, despite significant progress towards true quantum computing, numerous hurdles remain and the full potential of quantum machines is still a distant prospect.
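The source of that parallelism is easy to state: a register of n qubits occupies a superposition of all 2^n classical bit strings at once, so its state is a vector of 2^n complex amplitudes. Here is a minimal sketch, simulating the state vector classically in Python – exactly the task that becomes infeasible as n grows:

```python
import numpy as np

# Classical simulation of an n-qubit register: the state is a vector
# of 2**n complex amplitudes, one per classical bit string.
n = 20
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in |00...0>

# A Hadamard gate on every qubit puts the register into an equal
# superposition of all 2**n bit strings simultaneously.
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for qubit in range(n):
    state = state.reshape(-1, 2, 2 ** qubit)  # expose this qubit's axis
    state = np.einsum('ij,ajb->aib', hadamard, state).reshape(-1)

print(f"{2 ** n:,} amplitudes from just {n} qubits")
```

Each extra qubit doubles the memory a classical simulation needs – a 50-qubit register already demands petabytes – which is the sense in which quantum hardware promises exponential parallelism.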

Representation of laser-light soliton interactions for potential optical computer applications of the future. Credit: Research Media Ltd.

Bending light

Optical computers, on the other hand, are a far less bizarre proposition. In their simplest form, electronic hardware is swapped for photonic equivalents: wires carrying current give way to optical waveguides, and electrons are replaced by photons in transistors. This ‘simple’ change alone has the potential to make optical computers roughly 10 times faster than their electronic equivalents.

Decades of work have been conducted in this area. For my part, during my PhD studies at the University of Edinburgh under the tutelage of Professor Noel Smyth, I worked on the mathematical modelling of special optical waves in liquid crystals. These waves, named nematicons, could one day form the basis of devices in future optical computers; we concentrated on beam steering for all-optical logic operations. Our models aimed to approximate experiments conducted by a team led by Professor Gaetano Assanto at the University of Rome ‘Roma Tre’ in Italy – a group still making fascinating discoveries in this area.
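For readers who want a flavour of the mathematics, models in this area are typically variants of the following dimensionless system (a sketch of the standard nematicon equations rather than the exact system of any single paper): a nonlinear Schrödinger-type equation for the slowly varying beam envelope u, coupled to an elliptic equation for the optically induced rotation θ of the liquid crystal molecules,

$$i\frac{\partial u}{\partial z} + \frac{1}{2}\nabla^{2} u + \theta u = 0, \qquad \nu \nabla^{2}\theta - 2q\theta = -2|u|^{2},$$

where z is distance along the beam, the Laplacians act in the transverse plane, ν measures the nonlocality of the liquid crystal’s response and q is set by the static pre-tilting field. The beam rotates the molecules through the second equation, and the rotated molecules refocus the beam through the first – the feedback loop that lets a nematicon hold itself together and be steered.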

But despite fantastic progress, huge challenges remain in creating optical components that can compete with electronic devices. Optical materials such as liquid crystals often fall short on one or more of the key properties – cost, speed, size or energy consumption – needed to outdo silicon electronics.

New directions

Although dauntingly high, these barriers to optical computing are far from insurmountable, and there are plenty of reasons to be optimistic that optics may offer an alternative to the ever more difficult challenge of keeping pace with Moore’s law. For instance, materials science has been flourishing in recent years. With the design and manipulation of materials at the nanoscale opening doors to unique and unsurpassed properties, the perfect material for optical logic may be just around the corner. And looking to revolutionise not only components but computer design itself, UK company Optalysys recently launched a proof-of-concept massively parallel optical processor capable of performing mathematical functions and operations. The team behind the technology predicts it will provide a step change in computing for the big data and computational fluid dynamics applications of the future.
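Optalysys has published few low-level details, but the classic principle behind optical processors of this kind is Fourier optics: a lens performs a two-dimensional Fourier transform of a light field in a single pass, which turns expensive convolutions – a workhorse operation of computational fluid dynamics and much of big data analysis – into cheap pointwise multiplications. Here is a sketch of that mathematical shortcut in Python; it illustrates the principle only, not Optalysys’s actual design:

```python
import numpy as np

# The shortcut optical processors exploit: convolving a 2-D field with
# a kernel becomes a pointwise product in the Fourier domain, and an
# optical system gets the Fourier transform from a lens in one pass.
rng = np.random.default_rng(0)
field = rng.standard_normal((512, 512))
kernel = rng.standard_normal((512, 512))

# Transform, multiply pointwise, transform back: a circular convolution.
convolved = np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel)))

print(convolved.shape)  # (512, 512)
```

On an optical bench the transforms and the pointwise product happen in parallel across the whole field at once, which is the kind of parallelism behind the predicted step change for convolution-heavy workloads.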

Notes

1. Gordon E. Moore, Intel co-founder, first described this trend in 1965 in his article “Cramming more components onto integrated circuits”, Electronics, 19 April 1965. The term “Moore’s law” was coined around 1970 by the Caltech professor Carver Mead in reference to Moore’s statement.


Ben Skuse (@ResearchMediaBS) is a Senior Editor for Research Media Ltd. Having completed a PhD in Applied Mathematics at the University of Edinburgh, Ben found he had more skill in talking and writing about science than in conducting it, and began freelance writing for Research Media in 2010. In 2011, Ben decided to commit to a publishing career and undertook an MSc in Science Communication at the University of the West of England. During this time, he continued writing for Research Media and, when an editorial position became available in 2012, he grabbed the opportunity with both hands. Since then, Ben has risen to Senior Editor, producing the Technology series of the flagship publication International Innovation and managing a tight-knit editorial team.
