
Tue Feb 26, 2013, 12:10 PM

Artificial Brain Blueprint Outlined by Scientists: Constructed Synapses May be Key

Artificial intelligence in computers is closer than ever. Using electronic microcomponents that imitate natural nerves, scientists have constructed a memristor that is capable of learning, and may pave the way for an artificial brain.

The findings, which will be published in early March in the print edition of the Journal of Physics, examined the use of memristors as components for a larger, artificial brain. Memristors are nanocomponents made of fine nanolayers that can be used to connect electric circuits; they are essentially the artificial version of synapses in the brain. Synapses are the bridges that nerve cells (neurons) use to contact each other. Their connections grow stronger the more often they are used, and stronger connections process signals more quickly. Like synapses, memristors learn from earlier impulses; currently, these impulses come from the circuits they are connected to and are electrical in nature.
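The article gives no equations, but the learning behavior it describes (a connection that strengthens with repeated impulses and so "remembers" its history) can be sketched as a toy model. Everything below, including the class name, parameters, and update rule, is a hypothetical illustration, not the researchers' actual device physics:

```python
class MemristorSynapse:
    """Toy memristor-like synapse: each pulse nudges its conductance
    toward a physical maximum, so repeated use strengthens it."""

    def __init__(self, conductance=0.1, g_max=1.0, learn_rate=0.2):
        self.g = conductance          # current conductance (arbitrary units)
        self.g_max = g_max            # upper bound on conductance
        self.learn_rate = learn_rate  # how strongly each pulse changes the state

    def pulse(self, voltage=1.0):
        """Apply one electrical impulse; conductance grows, saturating at g_max."""
        self.g += self.learn_rate * voltage * (self.g_max - self.g)
        return self.g * voltage       # resulting current (Ohm's law: I = G * V)

syn = MemristorSynapse()
currents = [syn.pulse() for _ in range(5)]
# Identical pulses produce progressively larger currents:
# the device's state encodes how often it has been used.
```

The saturating update (proportional to `g_max - g`) mirrors the biological intuition that a synapse cannot strengthen without bound; a real memristor's state would instead be set by ion migration in its nanolayers.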

It's not surprising that the scientists involved in this work believe that these artificial synapses can be used to create an artificial brain. The study is the first of its kind to summarize exactly what principles from nature need to be transferred to technological systems in order to make the "brain" function.

Andy Thomas, the lead researcher involved in the study, said in a press release, "They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves."


3 replies, 1107 views



Replies to this discussion thread (3 replies)

Artificial Brain Blueprint Outlined by Scientists: Constructed Synapses May be Key (Original post) by Redfairen, Feb 2013
#1 by formercia, Feb 2013
#2 by goldent, Feb 2013
#3 by Hugabear, Feb 2013

Response to Redfairen (Original post)

Wed Feb 27, 2013, 04:39 PM

1. "processors that are able to learn by themselves."

The Three Laws of Robotics (often shortened to The Three Laws or Three Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These form an organizing principle and unifying theme for Asimov's robot-based fiction, appearing in his Robot series, the stories linked to it, and his Lucky Starr series of young-adult fiction. The Laws are incorporated into almost all of the positronic robots appearing in his fiction and cannot be bypassed, being intended as a safety feature. Many of Asimov's robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself. Other authors working in Asimov's fictional universe have adopted them, and references to them, often parodic, appear throughout science fiction as well as in other genres.

The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other. In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.



Response to Redfairen (Original post)

Thu Feb 28, 2013, 12:22 AM

2. "Artificial intelligence in computers is closer than ever."

To anyone involved in artificial intelligence, that statement is hilarious. People have been saying that for the last 50+ years.


Response to goldent (Reply #2)

Thu Feb 28, 2013, 02:00 AM

3. One of these days I tell ya, we'll put a man on the moon

To think that scientists actually believe we will make any real advancements in artificial intelligence over the next decade or so.

