Scientists say chaos theory could keep Moore’s Law alive

Scientists in the US have developed non-linear computer circuits based on chaos theory – the branch of mathematics dealing with systems so sensitive to their starting conditions that even tiny changes can have big consequences.

The new approach could lead to more efficient chips that draw less power, and could effectively keep Moore’s Law alive.

Moore’s Law basically states that the number of transistors in an integrated circuit doubles approximately every two years. In other words, the complexity – and therefore processing power – of computer chips keeps increasing, thanks to this ever-dependable doubling of transistors.

But the problem with Moore’s Law – named after Intel co-founder Gordon Moore – is that it’s based on an observation Moore made back in 1965.

And while Moore’s Law has basically held up for 50 years – thanks to decades of advances in the design and manufacture of computer chips – nobody expects it to last indefinitely. Doubling the transistor count means ever thinner and smaller transistors, which now measure just nanometres across.

At some point, it becomes less a question of clever engineering, and more the prospect of running into the inescapable laws of physics – even Moore himself thinks so.

“[S]omeday it has to stop,” Moore observed on the 50-year anniversary of his law last year. “No exponential like this goes on forever.”

But thanks to chaos-based circuitry, the spirit – if not technically the transistor count – of Moore’s Law might be able to continue unabated.

“We’re reaching the limits of physics in terms of transistor size, so we need a new way to enhance the performance of microprocessors,” says lead researcher Behnam Kia from North Carolina State University. “We propose utilising chaos theory – the system’s own non-linearity – to enable transistor circuits to be programmed to perform different tasks.”

In their latest project, Kia’s team designed a non-linear chip that can perform multiple functions with fewer transistors than conventional linear circuits.

Whereas a conventional linear design performs just one task per transistor circuit, a non-linear, reconfigurable transistor circuit plays host to a rich set of dynamical patterns, which can be selectively employed in different ways and at different times.
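The idea of one non-linear element computing several functions predates this work: so-called ‘chaogates’, proposed by Sinha and Ditto, pass logic inputs through a single chaotic map and pick the function with a control offset. The sketch below uses the textbook logistic-map values from that earlier scheme – it is an illustration of the principle, not the NC State team’s actual circuit design.

```python
# Illustrative sketch of a 'chaogate' (Sinha & Ditto's scheme), not the
# NC State hardware: one pass through the logistic map f(x) = 4x(1-x),
# then a threshold. Changing only a control offset reprograms the same
# element to act as AND, OR, or XOR.

def chaogate(i1, i2, offset, delta=0.25, threshold=0.75):
    x0 = offset + (i1 + i2) * delta    # encode the two logic inputs
    fx = 4.0 * x0 * (1.0 - x0)         # one iteration of the chaotic map
    return 1 if fx > threshold else 0  # threshold the analogue result

# The offset alone selects which gate the circuit computes:
GATES = {"AND": 0.0, "OR": 0.125, "XOR": 0.25}

for name, offset in GATES.items():
    table = [chaogate(a, b, offset) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
    print(name, table)
# AND [0, 0, 0, 1] / OR [0, 1, 1, 1] / XOR [0, 1, 1, 0]
```

Same map, same threshold, three different logic gates – which is the ‘more out of less’ idea in miniature.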

The result isn’t ‘chaos’ in the everyday sense of disorder. Chaos theory describes deterministic systems so sensitive to their starting conditions that tiny differences snowball into wildly different outcomes – an idea popularly symbolised by the butterfly effect.
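That sensitivity is easy to demonstrate with the logistic map, a standard toy chaotic system (again, an illustration rather than the researchers’ circuit): two starting values differing by one part in a billion soon follow completely different trajectories.

```python
# Illustrative only: the logistic map x -> 4x(1-x) is a textbook chaotic
# system, not the researchers' circuit. A difference of one billionth in
# the starting value grows to order 1 within a few dozen iterations.

def max_divergence(x, y, steps=50):
    """Iterate both points and record how far apart they ever get."""
    widest = abs(x - y)
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
        widest = max(widest, abs(x - y))
    return widest

print(max_divergence(0.4, 0.4 + 1e-9))  # tiny gap, large eventual divergence
```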

In terms of a computer circuit, “[we] utilise these dynamics-level behaviours to perform different processing tasks using the same circuit,” says Kia. “As a result we can get more out of less.”

It’s a totally different approach to just shrinking and squeezing in more transistors, as it kind of reimagines what a transistor is in the first place – and what it’s capable of. And it could lead to new kinds of gains that aren’t possible just by increasing transistor counts with ever smaller circuitry.
