Intelligent Machines: How They Learn, Evolve & Make Memories

Machines are slowly taking over. Every day, complicated algorithms serve up targeted advertisements, dictate the search results on Google, and protect our sensitive data from other algorithms trying to steal it.

It would probably surprise most people to learn just how much advanced computer programs assist and influence our everyday lives. Their increasing efficiency and semi-autonomy make the trillions of computations they perform each day practically invisible to us.

One particularly taken-for-granted form of machine labour is the computational work put in to keep our payment details safe and guard against fraud. Artificial intelligence systems are locked in an arms race with malicious programs, and they have to keep adapting (i.e. ‘learning’) to stay ahead of their ‘rivals’. Not only that, but they must also reconfigure themselves to meet complex regulatory demands such as the PCI DSS (the card industry’s security standard) and the GDPR (Europe’s data-protection regulation).

In the global, transnational age, our AI systems are constantly juggling data — and increasingly outpacing and outperforming humans.

But what is artificial intelligence? And how do integrated circuits and electrical currents manage anything at all — never mind learning to manage new tasks?  To understand, we will need to briefly cover a topic that is still unresolved after nearly a century of debate.

Defining intelligence

It is surprisingly hard to define intelligence, and there is no uncontested definition for it. Some AI researchers think intelligence entails a capacity for logic, understanding, planning, self-awareness, creativity, and learning, to name a few. But opponents would argue that these are human-centric viewpoints.

Instead, it might be easier to give intelligence a broad and malleable definition: intelligence is “the ability to accomplish complex goals”. On this view, there can be broad and narrow forms of intelligence. A calculator, for example, is narrowly intelligent in that it can solve arithmetic much more quickly than humans. But it cannot do much else, so an infant could be said to be more broadly intelligent than a calculator.

But ultimately, intelligence on this definition is all about information and computation. It has nothing to do with flesh and blood, and so there is no difficulty in recognising that the machines around us, such as the systems that protect our card payments, have a degree of ‘intelligence’ to them.

Creating artificial intelligence from nothing

If intelligence is just information and computation — then I can hear you ask — what are information and computation?

If there is anything we learned in physics class, it’s that everything in the universe is made up of really small particles. Particles don’t have intelligence on their own. So how can a bunch of dumb particles moving around according to the laws of physics exhibit behaviour that we would call ‘intelligent’?

To answer this question, we will have to explore the concepts of ‘information’ and memory storage more closely.

The nature of information and memory storage

Look at a map of the world and what do you see? You see particles arranged in particular patterns and shapes. Particles in the shape of the British Isles, for example, alongside more particles in the shape of letters spelling ‘Great Britain’, tell your brain that you are looking at a representation of Great Britain on a map.

In other words, the particles — arranged in a particular way — have presented to you information about the world.

A map is a simple memory device as well as an informational device. Like all good memory storage units, a map can encode information in a long-lived state.

Long-lived states are important because they allow us to retrieve information time and time again. Some things in the physical world make terrible memory storage devices. For example, imagine writing your name in the sand on the beach. The sand now contains ‘information’ about who wrote in it. But if you returned to the beach a couple of days later, your name would most likely no longer be there. In contrast, if you engraved your name onto a gold ring, the information would still be there years later.

The reason sand is bad for storing information — and a gold engraving is good — is that reshaping gold requires significant energy, whereas wind and water will effortlessly displace sand grains.

So we can define information as patterns that make sense to us, and memory storage as how good something is at keeping that information in one piece, for retrieval at a later date.

In computers, the simplest memory storage unit is one with two stable, long-lived states: the “bit” (short for ‘binary digit’), which is either a 0 or a 1. The information it reveals to an observer depends on which of these states it is in. These two-state systems are easy to manufacture and are embodied in all modern computers, albeit in a variety of different ways.
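
To make this concrete, here is a minimal, illustrative Python sketch (not from the article) showing how even a pair of letters is ultimately stored as bits, each a long-lived two-state value:

```python
# Illustrative sketch: any piece of information can be encoded as bits.
# Here the two letters "GB" are turned into their underlying 0s and 1s.
message = "GB"
bits = [format(byte, "08b") for byte in message.encode("ascii")]
print(bits)  # ['01000111', '01000010'] -- 'G' and 'B' stored as two-state values
```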

The art of computation

So that’s how a physical object can remember information. But how can it compute?

A computation is essentially a transformation of one memory state into another. It takes information and transforms it, implementing what mathematicians call a function.

Calculators do this all the time. You input certain information, such as 1 + 1, and press the equals sign, and a function is implemented to give the answer of 2. But most functions are much more complicated than this. For example, the machines that monitor weather patterns use highly complex functions to predict the chances of rain tomorrow.
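
As a tiny, assumed illustration (not from the article), the calculator example can be written as a function in Python:

```python
# A computation implements a function: it transforms an input memory state
# into an output memory state.
def add(a: int, b: int) -> int:
    return a + b

print(add(1, 1))  # 2 -- the state "1 + 1" has been transformed into the state "2"
```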

For a memory state to transform information, it must exhibit complex dynamics, so that its future state depends in a programmable way on its present state. When we feed the memory state new information, its structure must respond, change and re-order itself into the new informational state. This process happens purely by obeying the laws of physics, but the system is arranged so that those laws transform the memory state into the result we want. (Kind of like how diverting a river does nothing to change the laws of nature that created the river in the first place, yet still achieves the end result the diverters wanted.) Once this happens, we have a function.

The simplest function: memory storage and information transformation with a NAND gate

A NAND gate is one of the simplest kinds of function. It takes two bits of information (two binary memory states) and outputs one bit: 0 if both inputs are 1, and 1 in every other case.
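
A minimal sketch of that rule as code (an assumed illustration, not part of the original article):

```python
# The NAND gate as a function on two bits: 0 only when both inputs are 1.
def nand(a: int, b: int) -> int:
    return 0 if (a == 1 and b == 1) else 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
# 0 0 -> 1
# 0 1 -> 1
# 1 0 -> 1
# 1 1 -> 0
```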

For example, if we connect two switches in series with a battery and an electromagnet at the end, the electromagnet will only be on if both the first switch and the second switch are closed (“on”). If a third switch is then placed under the electromagnet, such that the magnet pulls it open whenever it is powered, we have a NAND gate — a situation in which the third switch is open only when the first two are closed.

There we have a complex dynamic, where one state of memory (and by extension information) follows the laws of physics into another state. Today NAND gates are built in far more compact and efficient ways (from transistors rather than switches and electromagnets), but the premise is the same. Seemingly “dumb”, lifeless particles exhibit behaviour that changes and develops from one state into another.

The final jump: from memory storage and information transformation to learning

The ability to learn is arguably the most fascinating aspect of general intelligence. Now that we have explored how a seemingly dumb clump of matter can remember and compute information, it is time to ask: how can it learn?

With the functions described above, it is human engineers who arranged the physical states in such a way that the laws of physics are ‘tricked’ into transforming information. For matter to learn, it must rearrange itself to get better at completing the desired functions — simply by obeying the laws of physics.

Imagine placing a rubber ball on a memory-foam mattress and then lifting it back up again. What would happen? Likely a small impression on the surface, and nothing more. But keep doing it, and eventually an indentation appears, and the ball nestles into it. The point of this simple analogy is that if you do something often enough, you create a memory: the foam “remembers” where the ball is usually placed, and the indentation marks that information.

Machines learn in a manner not unlike how the synapses in our brains learn. In fact, the field known as machine learning uses artificial neural networks to achieve exactly this kind of learning.

If you repeatedly put the bits of a network into certain states, the network will gradually learn those states and can return to them from a nearby state. For example, because you have seen each of your family members many times, memories of what they look like can be triggered by anything related to them.
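
A minimal sketch of this idea, assuming a small Hopfield-style associative network (an illustration, not the article’s own example): patterns are stored as +1/−1 bits, and the network is later nudged back to a stored pattern from a nearby, corrupted one.

```python
import numpy as np

# Store two patterns as +1/-1 bits, then recover one from a corrupted copy.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]])

# Hebbian-style storage: strengthen connections between bits that are "on" together.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

state = np.array([1, -1, 1, -1, 1, 1])  # the first pattern with its last bit flipped
for _ in range(5):                       # repeatedly update towards a stored state
    state = np.sign(W @ state)

print(state)  # [ 1 -1  1 -1  1 -1] -- the network returns to the stored pattern
```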

Many artificial neural networks act as feedforward functions, meaning data flows in only one direction. If we think of a neural network as a collection of nodes connected by wires, then at each step the nodes perform a simple mathematical calculation, combining (for example, as a weighted sum) all the inputs they receive from neighbouring nodes.
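
As a rough, assumed illustration, a feedforward pass can be written in a few lines: each node combines the inputs from the previous layer (here as a weighted sum) and applies a simple activation.

```python
import numpy as np

# A single forward pass through a tiny two-layer network (illustrative weights).
def layer(inputs, weights):
    return np.tanh(weights @ inputs)  # combine the incoming values, then "activate"

x = np.array([0.5, -1.0, 0.25])       # input data, flowing in one direction only
W1 = np.array([[0.2, -0.4, 0.1],
               [0.7,  0.3, -0.5]])
W2 = np.array([[0.6, -0.9]])

output = layer(layer(x, W1), W2)
print(output)
```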

Do this for long enough, and the nodes are repeatedly exposed to the information that each of the others carries. This is very similar to what is known as Hebbian learning, and to the way synapses in the brain “fire together and wire together”, forming memories and associations. Just by following the laws of physics, after an initial feedforward function has been engineered, artificial neural networks can ‘learn’ surprisingly complex things. Nodes that receive strong input from one another converge, while input from less relevant nodes is dropped; the network thereby completes its activation and settles into a new state: a learned one.
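
A sketch of a Hebbian-style update rule (an assumed example, not the article’s): connections between nodes that are active at the same time get stronger with every exposure.

```python
import numpy as np

# Repeatedly expose a small network to the same pattern; co-active nodes
# ("fire together") end up with the strongest connections ("wire together").
weights = np.zeros((4, 4))
pattern = np.array([1.0, 0.0, 1.0, 0.0])
learning_rate = 0.1

for _ in range(50):
    weights += learning_rate * np.outer(pattern, pattern)

np.fill_diagonal(weights, 0)
print(np.round(weights, 1))  # the strongest weights link nodes 0 and 2
```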

Conclusion

Intelligence is the ability to achieve complex goals. Right now, advancements in machine technology are enabling machines to learn and create their own functions and memories, and adapt to new challenges. All that is needed for this type of artificial intelligence is an initial human-engineered spark, and then for the natural progression of the laws of physics to take over.

So there we have it: a way for seemingly dumb, non-conscious, non-living matter to act a little spookily like our own brains.

As of right now, most AI systems are “narrowly” intelligent. Some researchers estimate that true “broad” artificial intelligence could arrive around the middle of this century (whether it is possible at all is still up for debate).

One thing is for certain: machines will continue to learn, getting faster, better, and more efficient. As a result, we can expect increasing automation and the displacement of many jobs, but also the growth of more social occupations. The social world is the last bastion of humanity yet to be conquered by the great algorithm revolution.


Thomas Owens is the copywriter for Dpack. He has a background in science and a master’s degree in journalism.
He lives in Liverpool, UK.
