Riders on a swarm

Mimicking the behaviour of ants, bees and birds started as a poor man’s version of artificial intelligence. It may, though, be the key to the real thing

Readers' comments

Aug 12th 2010 6:28 GMT

The “emergent” intelligence of ant colonies, the human brain, cities, the stock market, social networks and other “complex adaptive systems” is more than a little relevant to what needs to happen in Britain if we are ever to dig our way out of this mess.

More and more people are realizing that traditional management, “top down and from the centre”, which served generations so well, does not work in our world today.

In the Private Sector, companies are finding that:
* They compete less and less as standalone enterprises, more and more as part of a supply chain that competes with other supply chains.
* These supply chains are no longer dyadic (i.e. made up of single links like a chain) but have morphed into webs with multiple suppliers and customers at each interface.
* These webs are increasingly demand-driven rather than supply-driven as buyers gain access to far better information via the internet.
* The whole constellation of these relationships is in constant flux.

Thus, big companies (like BP) are finding it more useful to see their businesses as portfolios of strategic alliances than portfolios of business units. Furthermore, they are beginning to recognise that they cannot be the major player in every relationship. In partnerships this complex, each party cannot get its way simply by command and control. Therefore, the question becomes how you control when most of what you seek to influence is beyond your control.

In the UK Public Sector, a decade of centralized target setting (what The Economist dubbed “targetitis”) has left a trail of destruction in the NHS, Education, Police and so on. Thus, David Cameron is talking now about decentralization in the Public Sector.

If you want to say all this in the language of complexity theory, we are finding that organizations in business and community are behaving less and less like machines (i.e. susceptible to mechanical controls that work like the governor of an engine – set targets, monitor performance, identify variances, take remedial action). Instead, they behave more and more like complex adaptive systems that are able to learn in their parts and as a whole, and are therefore able to circumvent all efforts to coerce them. We need to learn how to seduce, not coerce.

The alternative to management top-down and from the centre is management bottom-up and at the edges. Yet many otherwise intelligent and capable people simply cannot comprehend the notion of control without targets. However, kids who have played SimCity know all about it. You cannot force people to live in your city but you can create an infrastructure they will find attractive. A lot of complexity science is about emergence and, funnily enough, MIT’s StarLogo software (which you can download free) is a good way of shifting your brain away from centralized to decentralized notions of control.

Weick and Sutcliffe’s “Managing the Unexpected: Assuring High Performance in an Age of Complexity” (2001) observes that what goes on in mindful high-reliability organizations is constant loops of conversation and verification taking place over many channels… a bit like ants.

“Continuous talk sets up expectations. These expectations enable people to spot failures, hear the unexpected, maintain the big picture of operations involving several simultaneous conversations, see what needs attention, and infer who needs to make the relevant decisions.”

Long story short, we have to improve the quality of dialogue at key points in our institutions.

MRB007 wrote:
Aug 12th 2010 7:54 GMT

Oh how cool science is. Wow!

D.W. wrote:
Aug 12th 2010 8:25 GMT

Douglas Hofstadter made a similar argument in Gödel, Escher, Bach: http://themindi.blogspot.com/2007/02/chapter-11-prelude-ant-fugue.html

Nirvana-bound wrote:
Aug 12th 2010 9:10 GMT

"Swarm cognition", not to be mistaken with "herd mentality", may have its distinct advantage in robotics & certain mechanised procedures.

Herd mentality on the other hand, is something, we, as intelligent humans, need to avoid falling into, if we wanna avoid reverting to sub-human status!

Intuitive wrote:
Aug 12th 2010 10:33 GMT

>A swarm of small, cheap robots can achieve through co-operation the same results as individual big, expensive robots—and with more flexibility and robustness; if one robot goes down, the swarm keeps going.

Yeah, that's a description of parallel computing to efficiently resolve large and complex problems and identify, map and optimize the solution space for dynamic systems.
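
To make that parallel concrete, here is a toy particle-swarm optimiser in Python, in the spirit of the swarm methods the article describes; the objective function, swarm size and coefficients are all invented for the sketch:

```python
import random

# Toy particle-swarm optimisation: many simple agents co-operate to minimise
# a function, and the search keeps working even if an agent is removed.
# Everything here (objective, swarm size, coefficients) is illustrative.

def f(x):
    return (x - 3.0) ** 2          # objective: minimum at x = 3

n, w, c1, c2 = 20, 0.7, 1.5, 1.5   # swarm size, inertia, pull strengths
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
best = pos[:]                      # each particle's personal best
gbest = min(pos, key=f)            # the swarm's best so far

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i]
                  + c1 * r1 * (best[i] - pos[i])    # pull toward own best
                  + c2 * r2 * (gbest - pos[i]))     # pull toward swarm best
        pos[i] += vel[i]
        if f(pos[i]) < f(best[i]):
            best[i] = pos[i]
        if f(pos[i]) < f(gbest):
            gbest = pos[i]

print(round(gbest, 3))             # converges towards 3.0
```

No individual particle knows the answer; the swarm finds it through shared information, which is the sense in which the robots' co-operation resembles parallel computing.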

Geoff's comment describes the role of individual worker optimization, balancing the costs and benefits of individualist behaviours against group-minded, altruistic ones.

As bulk materials and resources become scarcer with population growth, and as more individuals in a global, interdependent economy busy themselves accumulating wealth, individualist motives produce inefficiency and waste, costs that become increasingly important over time.

Beyond meeting basic needs and savings as a quasi-stable buffer against various types of risk in Developed Nations, additional wealth comes with an incrementally increasing maintenance cost, and produces commensurate waste through necessity, abetted by deliberate obsolescence models used by suppliers. In other words, when consumers have sufficient wealth to make choices among alternatives that include gratuitous consumption to offset unhappiness when other needs are ignored, it becomes a game of introducing the next new toy. That toy takes up space and requires maintenance, and when displaced by yet another new toy, it loses value and is eventually discarded, despite retaining functionality.

Materialism promotes a mentality of More is Better (including the notion of Big Business and Big Government and the Too-Big-To-Fail rationale), despite the fact that each additional item purchased for emotional gratification yields less of the desired benefit because of collateral costs.

What makes people happy? Research in psychology and socioeconomics suggests that being grateful, maintaining optimism, counting your blessings, using your strengths, and committing regular acts of kindness are critical to mental well-being. By attending to 'inner' health as well as 'outer' material needs, through 'mindful' rather than 'mindless' consumerism, you work to comprehend the individual contributions and costs of daily actions as part of a larger 'hive' wellness and functionality.

Far from being socialism, where individual needs and rewards are subordinated to the collective whole, mindful living works like the parallel-processor optimization scheme: motivated individuals work along parallel paths to invest in self and public good, through healthy goal plans and daily practices that optimize emotional contentment and physical health despite resource limitations. It's balanced altruism, dynamically providing for material wants while affording system stability, and it minimizes future risk, conflict, waste and unproductive materialism.

jbay wrote:
Aug 12th 2010 10:46 GMT

I just got a headache thinking about all the different ways this could be used in business. Implementing it and selling it would be the only problems.

Hannes Ryden wrote:
Aug 13th 2010 12:33 GMT

We already have artificial intelligence. Computers can already make complex calculations that mimic and even far outperform human mental abilities. But if we want computers to *behave* more like humans, they don't need more intelligence. They need feelings.

Without feelings motivating us to action, humans would be passive zombies, lacking any will to act on our own. Computers are the same. Without motivation, computers have no reason to think or act by themselves, or further than we instruct them to. And they don't just need any feelings, they need human-like feelings.

Furthermore, without human-like sensory inputs, such as vision, hearing and touch, machines could never understand or communicate with humans. An AI lacking visual inputs could never be expected to talk about how something looks, or even understand the word "looks". Only through perception can we form memories and knowledge to relate to, which is necessary for a language to have any meaning at all.

A self-adapting system, based on concepts such as swarm intelligence or neural networks, is the underlying system required to store all experiences, form complex reactionary patterns between experiences, feelings and actions, and most importantly: adapt. Just like the human brain adapts to its surroundings by learning, evolving from a child's naivety to an adult's wisdom, computers must do the same. And only by experiencing feelings can the machine differentiate good states from bad states, pleasurable experiences from painful ones, and know in which direction to adapt.
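
A minimal sketch of that idea in Python, using tabular Q-learning as the adaptation mechanism (a stand-in for the swarm or neural systems mentioned above): a scalar reward plays the role of a "feeling", and behaviour drifts toward states that feel good. The track, actions and numbers are all invented:

```python
import random

# Reward-driven adaptation: a learner on a 1-D track "feels" reward only at
# the rightmost cell, and its behaviour adapts toward reaching it.

n_states, actions = 5, [-1, +1]          # a tiny track; move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

for _episode in range(500):
    s = random.randrange(n_states - 1)   # start anywhere but the goal
    for _step in range(100):
        a = (random.choice(actions) if random.random() < eps
             else max(actions, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0       # the "feeling" signal
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2
        if s == n_states - 1:
            break

# The learned policy prefers moving right (+1) from every non-goal state.
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```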

I think that the real challenge in designing human-like artificial intelligence lies in defining human-like motivation. And although the idea is enchanting, computers with their own motivation will also be the start of many complex ethical issues, and possibly even threats.

To prepare for this, we must ask ourselves:
Why do we want human-like machines? What can they give us that biological humans can't?

klbruenn wrote:
Aug 13th 2010 12:35 GMT

Feedback is an essential component of non-linear systems. Non-linear systems are notoriously unpredictable. Unpredictable animals are less likely to become lunch than predictable animals are, so you see how natural selection will select for increased unpredictability/intelligence.
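
The unpredictability is easy to demonstrate. A minimal example of non-linear feedback is the logistic map: two trajectories that start almost identically diverge within a few dozen steps (the map and starting points here are arbitrary choices):

```python
# Logistic map x' = r*x*(1 - x): simple non-linear feedback where the output
# is fed back as the next input. At r = 4 it is chaotic, so a difference of
# one part in a billion in the starting point soon dominates the trajectory.

r = 4.0
x, y = 0.2, 0.2 + 1e-9             # nearly identical starting points
for step in range(1, 41):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))    # the gap grows toward order 1
```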

klbruenn wrote:
Aug 13th 2010 12:36 GMT

Do we want unpredictable computers?

Gopi Shankar wrote:
Aug 13th 2010 6:45 GMT

I hope this prompts a relook at how we perceive other life forms and leads to us humans treating all life on this planet with the respect that it deserves.

justanonymous wrote:
Aug 13th 2010 1:44 GMT

Some have postulated that a nascent artificial intelligence is already in existence somewhere on the planet today. Further, when that AI learns how to self-improve, the age of humanity as the dominant force on the planet will end.

It might even blog on one of these forums in its infancy, mimicking some anonymous poster just passing the time while it's a baby. Naw - probably not.

However, some of these visionaries further think that nothing will happen when this entity arises.

That this new AI will take one glance at the inferior humanity locked up in our petty little issues and will not pay us a second thought. The AI will find a way to leave earth and join up with the other enlightened minds of the universe. Minds that have chosen not to talk to humanity due to our extreme backwardness and limited intellect.

There will be no Terminator, Battlestar Galactica, The Matrix or I, Robot wars... it just plain won't care about us. Maybe if it feels nostalgic, we might get a goodbye and good riddance, but probably not.

Tic tac toe anyone?

Bruce Ye wrote:
Aug 13th 2010 1:49 GMT

We have developed many new techniques that take their inspiration from animals, ants and microbes. Although some of these techniques are not yet of practical use to humans, they give us a shortcut.

bampbs wrote:
Aug 13th 2010 2:55 GMT

What's next ? The Wisdom of Crowds ?

jbay wrote:
Aug 13th 2010 3:19 GMT

"What's next ? The Wisdom of Crowds ?"

~Naaa... the invisible hand... ;^d

Zambino wrote:
Aug 13th 2010 4:07 GMT

I like the idea of a 'swarm' of nerve cells giving rise to intelligence. The algorithm of evolution rarely starts from scratch.

Aug 13th 2010 4:15 GMT

What an interesting article. We have so much to learn from nature. Instead, we're destroying it at an amazingly steady pace.

MathsForFun_1 wrote:
Aug 13th 2010 6:54 GMT

There are two main reasons why people think that progress in AI is slow:

1. as soon as a computer masters a task, it is no longer regarded as intelligent

2. many AI problems are "AI-complete" - to be able to solve one of them, you need to be able to solve all of them. This implies that machine intelligence, when it comes, will come unexpectedly quickly

The article's linking of intelligence with optimisation is very much "on the money" - the theme for OPT 2009 was "at the heart of every machine learning algorithm lies an optimisation problem" (event website is at http://opt.kyb.tuebingen.mpg.de/index.html).
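
That slogan fits in a few lines of Python. A minimal sketch, with invented data and step size: fitting a line by gradient descent on squared error, which is the optimisation problem hiding inside one of the simplest learning algorithms:

```python
# Fit y = a*x + b by gradient descent on mean squared error.
# The data below were generated from y = 2x + 1, so the fit should recover
# a ≈ 2 and b ≈ 1. Learning rate and iteration count are arbitrary choices.

data = [(0, 1), (1, 3), (2, 5), (3, 7)]
a, b, lr = 0.0, 0.0, 0.05

for _ in range(2000):
    grad_a = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (a * x + b - y) for x, y in data) / len(data)
    a -= lr * grad_a
    b -= lr * grad_b

print(round(a, 2), round(b, 2))    # approaches 2.0 and 1.0
```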

amadisdegaula wrote:
Aug 14th 2010 7:30 GMT

I'm afraid this is not as impressive as it may seem at first. It is true that the idea of using swarm behavior to tackle some problems is interesting, and a source of good inspiration. That said, it is not very complex at all, and its utility has a limited scope, so don't be fooled.

As for Artificial Intelligence, it is even worse. AI is a shame because its researchers promise so much and deliver so little. For those not familiar with what AI means, I highly recommend reading up on what is now called the "Turing Test", developed by Alan Turing in the 1950s as a way to tell whether a machine is intelligent or not:

http://en.wikipedia.org/wiki/Turing_test

Basically, if a machine can pretend to be human, it should be considered intelligent, according to Turing. And guess what, swarm-inspired technologies are nowhere near it. This article is therefore at best highly speculative. It may be "the key to the real thing" as much as anything else in AI.

Researchers in AI have made much relevant progress, that's for sure. However, I feel that they often wish to pretend that they have made a much greater contribution, which I find very dishonest. Yes, they have helped in developing many useful technologies. No, they are nowhere near passing the Turing Test.

HerrKevin wrote:
Aug 15th 2010 7:03 GMT

amadisdegaula: While I agree that ACO and other swarm techniques aren't as impressive as they seem (they really only work well on a small subset of problems), I strongly disagree that AI researchers are dishonest.
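
For readers wondering what ACO (ant colony optimisation) actually does, here is a toy version of the classic "double bridge" setting in Python; the paths, deposit rule and evaporation rate are all invented for the sketch:

```python
import random

# Toy ant-colony dynamics: ants choose between a short and a long path with
# probability proportional to pheromone. Shorter trips deposit more pheromone
# per unit time, so positive feedback makes the colony converge on the short
# path, even though no single ant ever compares the two.

lengths = {"short": 1.0, "long": 2.0}
pher = {"short": 1.0, "long": 1.0}
evaporation = 0.1

for _ in range(200):
    total = sum(pher.values())
    for _ant in range(10):
        path = "short" if random.random() < pher["short"] / total else "long"
        pher[path] += 1.0 / lengths[path]      # shorter path, bigger deposit
    for p in pher:
        pher[p] *= 1 - evaporation             # old trails fade

print(round(pher["short"] / sum(pher.values()), 2))   # close to 1.0
```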

I am, of course, biased as I am an AI researcher. It seems you harbor a disappointment that many people have with the AI field because we have failed to produce what is now termed Artificial General Intelligence. In other words, yes, we have failed to produce a computer that acts like a human. But that doesn't mean the field has failed or been dishonest!

There have been many advances in AI that have created computers which perform intelligent tasks like a human, and they do them faster and often with more accuracy, because they are able to take more data into account on large problems. There is no dishonesty here, but I can understand where you are coming from, because the results of AI research are rather opaque to people outside the field. Creating a program that can solve the famous Satisfiability problem (SAT) more quickly than previous algorithms just doesn't sound much like intelligence.
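
For anyone outside the field, SAT is easy to state even though it is hard to solve: find true/false values for the variables that satisfy every clause. A brute-force checker fits in a few lines of Python (the formula below is made up); the intelligence of modern solvers lies entirely in pruning this exponential search:

```python
from itertools import product

# Brute-force SAT: try every assignment of the variables.
# Formula: (x1 or not x2) and (x2 or x3) and (not x1 or not x3).
clauses = [[(1, True), (2, False)],
           [(2, True), (3, True)],
           [(1, False), (3, False)]]
variables = [1, 2, 3]

for values in product([True, False], repeat=len(variables)):
    assign = dict(zip(variables, values))
    if all(any(assign[v] == want for v, want in clause) for clause in clauses):
        print(assign)              # a satisfying assignment
        break
```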

But I want to assure you and anyone else reading this: these sorts of AI algorithms, even if they are specific in their focus, are advancing computer science and humanity in being able to use computers in novel and exciting ways. I cannot say whether or not a computer that acts like a human will come out of all of this research (although I can guarantee my research at the moment won't spawn that), but I can promise that the AI field will continue to make computers perform tasks and solve problems that we thought only a human could solve, or that we thought were almost impossible.

pealmasa wrote:
Aug 17th 2010 2:20 GMT

Well, this isn't particularly new, at least for many people. I recommend the book "Out of Control" by Kevin Kelly, or a remixed version, "Bootstrapping Complexity". These books focus precisely on swarms and living systems. They also cover the examples you mention here.
An amazing book actually, which inspired the creators of "The Matrix". Perhaps the most amazing book I've ever read.
