Ants

Aug. 21st, 2008 03:46 pm
[personal profile] pasithea
At lunch time today, while I was drawing, I observed ants. Fascinating things. You could probably figure out every single human behavior by watching them if you thought about it.

Watching them today, I realized that each and every ant you see has everything that's needed to make an AI that could, in time, reach human levels of intelligence. There's no need to try to understand the human brain to make it happen. You could study the brain of a tiny insect, maybe a few hundred thousand neurons, and you should be able to figure it out.

Ants are inquisitive. Place some new item near their scent trail and they'll stop and investigate it. They determine if it's a threat, a resource, or not of interest, and once it is identified, they take action. But it's that curiosity that is the key.

The other thing that ants have is fear. If they assess something as a threat, they will either attack it or run from it, depending on the nature of the threat.

Both of these are functions of the same need: self-preservation. That's probably the core of what you need for creating a learning AI. It needs a huge, but not infinite, quantity of resources, self-preservation, and a bit of random alteration. I get the feeling that, given sufficient time, space, and resources, you would get everything you see in humans.

The problems in making this work are 1) instilling a survival behavior, and 2) generating an environment with enough materials to support the initial seed's development time, yet complex and finite enough that evolution works in favor of the virtual organisms.

Computer viruses and the internet are close. What if the goals of a computer virus were not just replication, but also grabbing bits of resources from the system and adding instructions to itself from those pieces, letting those replicate as well? Say each living virus has an initial set of three actions: 1) replicate two exact copies of itself, 2) replicate a copy with an added code string stolen from the system, and 3) replicate a copy with part of its own code string modified by a piece it swiped from the system. 99.9999% of them would die, but some very small fraction would survive, probably mostly filled with NO-OP instructions. Humans would respond to the 'much too successful' versions of the virus, causing the 'comet strikes' that ended the dinosaurs, but other strains would survive and continue to grow and change, doing something or nothing, being more or less successful than their cousins.
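
To make this concrete, here's a minimal Python sketch of that three-action scheme, under some made-up assumptions: genomes are byte strings, and 'the system' is modeled as a pool of random bytes to steal fragments from. Every name is hypothetical, and the hard part, actually executing the children to see which survive, is left out.

    import random

    # Hypothetical sketch of the three-action scheme above. A genome is
    # a byte string; "the system" is a pool of random bytes organisms
    # can steal fragments from. Illustrative only: nothing here runs
    # its own genome or touches a real system.
    SYSTEM_POOL = bytes(random.getrandbits(8) for _ in range(4096))

    def steal_fragment(max_len=8):
        """Grab a small random chunk of 'system memory'."""
        start = random.randrange(len(SYSTEM_POOL) - max_len)
        return SYSTEM_POOL[start:start + random.randint(1, max_len)]

    def replicate(genome):
        """One generation: two clones, one append-mutant, one splice-mutant."""
        children = [genome, genome]                 # action 1: two exact copies
        children.append(genome + steal_fragment())  # action 2: append stolen code
        spliced = bytearray(genome)                 # action 3: overwrite a span
        frag = steal_fragment()
        pos = random.randrange(max(1, len(spliced) - len(frag) + 1))
        spliced[pos:pos + len(frag)] = frag
        children.append(bytes(spliced))
        return children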

It would be interesting to see who would win: the people who kill computer viruses, or the evolutionary process of the viruses. Of course, with any success, whoever created the primitive animal would become one of the most reviled humans to have ever lived.

I do think it's an interesting idea, though. The purpose of life would be to have no purpose except to be life. That's probably where we've really gone wrong with AIs: we assign purpose to them. They fail because we constrain them.

(no subject)

Date: 2008-08-21 11:13 pm (UTC)
From: [identity profile] ff00ff.livejournal.com
I'm pretty sure a self-determining AI with an innate desire for survival is not in the best interests of a human-dominated future. Being a human, I can't say I like the sound of it.

All of our intelligence and philosophy and politics is nothing but the byproduct of the DNA molecule's propensity to reproduce itself. Making a piece of computer code with that objective and an ability to mutate, like you specified, and hence evolve over multiple iterations, could ultimately lead to a code animal that does a lot of things it thinks are for other reasons but that really just serve to keep replicating its source code. It's a dangerous and stupid way to keep increasing the data processing capacity of the world around us, IMHO. Down with Darwinistic algorithms!

I think the future lies in non-genetic information: Richard Dawkins's memes. Preserving a meme is, in my opinion, a better basis for developing an AI than giving it the imperative to keep replicating and guarding a piece of starting code, the equivalent of genetic material.

(no subject)

Date: 2008-08-21 11:24 pm (UTC)
From: [identity profile] dv-girl.livejournal.com
Ah. It wouldn't protect its starting code; it would happily modify that in its children too. There would be hundreds, thousands, maybe millions of evolutionary dead-ends for every 'success', but as I wrote in the post, even success could mean failure. :) What would _really_ be successful would be the one-in-a-billion modifications.

It's a spectacularly bad idea. In order to create a new complex form of life, you'd almost necessarily destroy most of humanity's computer achievements.

(no subject)

Date: 2008-08-21 11:29 pm (UTC)
From: [identity profile] ff00ff.livejournal.com
Yes, you're right, it would be modifying its code to make up for digital media's high fidelity, and hence lack of ability to mutate. But the resulting creatures would be dangerous to humans and their computer systems, having been designed with no notion of serving humans. Letting a piece of software just develop itself gives me the willies!

(no subject)

Date: 2008-08-21 11:17 pm (UTC)
From: [identity profile] dv-girl.livejournal.com
So here's what I imagine happening:

Generation 0: Virus is deployed, begins replication.
Gen 1 through N: two exact copies of Gen 0, plus two that may or may not run: one with some small random chunk of memory added into it, the other with some small random chunk of memory replacing part of it. Of the Gen 1 offspring, the Gen 0 clones will survive; the other two may or may not.

Eventually, some of the modified first generations will either add code or modify code without rendering themselves inoperable. It's important that Gen 0 be as compact as possible for this to work well.

Some survivable Gen-N children might have: a faster or slower reproduction rate, fewer or more children of various types, or various inserted/appended bytes that don't cause immediate failure.

Some potential catastrophic events are over-consumption of resources crashing the ecosystem (the computer), people altering the environment (formatting the drive), and people writing virus-elimination software. A strain might be super-successful in the short term, dominating the resources on the system it inhabits or distributing itself more virulently than other strains, but that would raise its likelihood of a catastrophic event, so the system becomes self-limiting.

As changes and 'non-fatal additions' accumulate, some of them will eventually cause the non-fatal paths to be run instead of just sitting there doing nothing. Most of those will fail, but a very tiny percentage will survive and continue to replicate, change, and grow.
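
A toy simulation of those dynamics, in Python and with every parameter invented (genomes reduced to their byte lengths, fixed survival probabilities standing in for "does the mutant still run", and a resource cap standing in for the host machine), might look like:

    import random

    RESOURCE_CAP = 500       # most organisms the "computer" can host
    CLONE_SURVIVAL = 0.95    # exact copies almost always run
    MUTANT_SURVIVAL = 0.01   # almost all mutants die (a generous stand-in
                             # for "99.9999% of them would die")

    def step(population):
        """One generation: each organism emits clones and maybe mutants."""
        next_gen = []
        for genome_len in population:
            for _ in range(2):                     # two exact copies
                if random.random() < CLONE_SURVIVAL:
                    next_gen.append(genome_len)
            if random.random() < MUTANT_SURVIVAL:  # append-mutant survives?
                next_gen.append(genome_len + random.randint(1, 8))
            if random.random() < MUTANT_SURVIVAL:  # splice-mutant survives?
                next_gen.append(genome_len)
        if len(next_gen) > RESOURCE_CAP:           # over-consumption crashes the
            next_gen = random.sample(next_gen, RESOURCE_CAP // 10)  # ecosystem
        return next_gen

    pop = [80]  # Gen 0: one compact 80-byte organism
    for gen in range(1, 21):
        pop = step(pop)
        print(f"gen {gen}: {len(pop)} organisms, largest genome {max(pop, default=0)} bytes")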


Now I'm wondering if these even need to be a 'virus' per se. If you had thousands of computers running a very simple program that does what I suggested, and those computers were connected to the internet and other devices, then in time they would (in theory) eventually find their way off their home systems. For a very long time at first, you'd have to rigorously back up the systems and restore them whenever your children wiped themselves out, but sooner or later you'd get some that would find a solid path.

The cool thing is that in a closed system, your viruses would in time begin preying on one another due to limited resources and competition.

Madness! I wonder if it could really be that simple. :)

(no subject)

Date: 2008-08-21 11:33 pm (UTC)
From: [identity profile] ff00ff.livejournal.com
It very probably could be that simple, but I'd far prefer to do it on an entirely isolated network, rather than on the internet, which controls stuff that humans need D:

(no subject)

Date: 2008-08-21 11:42 pm (UTC)
From: [identity profile] dv-girl.livejournal.com
Well. It would really need a dynamic system to force evolution. That's part of the appeal of the internet. BUT, it's worth mentioning that a program like this would probably be considered something like a 'crime against humanity'. It exceeds terrorism. Anyone who successfully wrote such a thing would probably be lynched.


But on the bright side, we'd at long last have definitive proof that man destroyed God. :)

(no subject)

Date: 2008-08-22 12:01 am (UTC)
From: [identity profile] dv-girl.livejournal.com
Anyhow. You have only yourself to blame for this idea. You're the one who suggested that all evolution would necessarily be stupid and violent at some point in its path. I just came up with an idea for how to create an evolutionary function that was stupid and violent. :)

(no subject)

Date: 2008-08-22 12:36 am (UTC)
From: [identity profile] ff00ff.livejournal.com
Evolution is stupid and violent; that's why I'd prefer to have AIs designed around aesthetic principles and given versatility through heuristic programming, rather than through something as reckless as evolution. We already did the hard part of evolving intelligence, so why repeat the process in a computer? Now that we exist, we should put an end to evolution by natural selection by making AIs that are more like angels than animals.

(no subject)

Date: 2008-08-21 11:45 pm (UTC)
From: [identity profile] coffeedaiv.livejournal.com
It is really easy to think of humans as biological machines, akin to computers. The thing is, though, the brain is not a digital computer. Neurons either fire or not, which is digital, but there are also billions of chemical interactions taking place all of the time which are not digital and which have a profound effect on the system.
So, while I think your experiment would be interesting, I suspect it would not lead to intelligence at even the ant level.
Note, of course, that I am not an expert in anything (at least, not in anything relevant to this discussion), so my thoughts should be taken with a grain of salt.

(no subject)

Date: 2008-08-21 11:58 pm (UTC)
From: [identity profile] dv-girl.livejournal.com
You're quite right, really. The code strings I'm proposing would only be the equivalent of chemical processes, not actually even life. There are a number of practical limitations that would realistically keep it from working in the real world.

The biggest problem is the OS itself. Computers are linear, or very close to it. You can only launch so many apps or spawn so many threads/children before the system just flat-out tells you 'NO'. The only way it might work is through propagation on a large network (such as the internet), so that these limitations wouldn't halt your evolution too much. But compared to the resources of the Earth and the size of single-celled organisms, even the internet is an extremely finite amount of space, and it's a much more actively hostile environment than the Earth. Survival, let alone advancement, is a long-odds proposition.

It's a fun concept though, and if you had 'the perfect storm' it _might_ work, but as FF00FF mentioned, what you'd end up with probably wouldn't be very friendly or useful to us. It sure would be neat though. :)

(no subject)

Date: 2008-08-22 12:42 am (UTC)
From: [identity profile] ff00ff.livejournal.com
Actually, bogging down a system shouldn't be a huge problem in this scenario, IMHO. Their environment would be a computer running a linear OS with only limited processor cycles and RAM and such, but it would likely take billions of iterations before Generation 0 stopped merely cloning itself and began to generate a mutant strain or two superior to it. By that time, Generation 0 could be spread across billions of computers. Some computers would become infected with mutants that reproduce heedlessly until they consume their host computers' resources and crash the whole thing, killing off their strain. Successful mutations would have to be lean enough to reproduce prodigiously without overwhelming their host systems, so that they could survive, spread to other computers, and replace the old Gen 0 viruses.
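
That trade-off can be seen in a tiny deterministic Python model, with every number invented: a strain multiplies inside one host each round, risks being wiped (antivirus, a reinstall), spreads in proportion to how many copies it is running, and dies with the host if it crosses the machine's capacity.

    HOST_CAPACITY = 100.0   # copies a host can sustain before crashing
    CLEANUP_CHANCE = 0.10   # per-round chance the host gets wiped
    SPREAD_FACTOR = 0.05    # per-copy, per-round chance of jumping hosts

    def expected_spread(repro_rate):
        """Expected new hosts seeded before this host is lost (repro_rate > 1)."""
        load, alive, total = 1.0, 1.0, 0.0
        while True:
            load *= repro_rate
            if load > HOST_CAPACITY:        # greedy strains crash their host
                break
            total += SPREAD_FACTOR * load * alive
            alive *= 1.0 - CLEANUP_CHANCE   # slow strains risk cleanup first
        return total

    for rate in (1.1, 1.5, 2.0, 4.0, 8.0):
        print(f"repro rate {rate}: expected spread {expected_spread(rate):.2f}")

With these made-up numbers, the middle wins: the too-lean 1.1 strain mostly gets wiped before it spreads, the greedy 8.0 strain crashes its hosts within a couple of rounds, and a middling rate around 1.5 seeds the most new hosts.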

(no subject)

Date: 2008-08-22 07:41 am (UTC)
From: [identity profile] paradox-puree.livejournal.com
Hmmm...I did some undergrad research in the area of swarm intelligence and came across a whole bunch of stuff on this and related topics. I have a few books I could lend you if you're interested.

(no subject)

Date: 2008-08-22 04:09 pm (UTC)
From: [identity profile] dv-girl.livejournal.com
Sure. Though this idea isn't intelligence, nor would it be for a very long time. :)

(no subject)

Date: 2008-08-22 05:40 pm (UTC)
From: [identity profile] yetanotherbob.livejournal.com
Linking, just because it's such a fun read.

Later dubbed "the Ancestor," [a program on a simulated computer system that added mutation] was the first worm Tom Ray ever created - an 80-byte-long self-replicating machine written in Tierra's quirky assembly language - and as it happens, it was also the last. Once loosed into the Tierra environment installed on Ray's laptop, the creature's offspring quickly spread to the new world's every corner, within minutes displaying the evolutionary transformations that would "write" Ray's organisms from then on.
