Wednesday, November 13, 2019

Neural Vision System

Researchers at the University of Houston in Texas have developed a neural vision system that allows a robot to adapt to a changing world. The machine is designed to explore, experience (for better or worse), and then make future decisions based on that experience: typical behavior for any neural device. What is unusual is its ability to "learn new tricks" when the rules it learned through experience no longer apply. In both simulated and hardware experiments, the robot was shown to identify objects correctly even when the value associated with them changed over time.

Neural networks are computing devices modeled on the way our own brains work. They consist of many, usually simple, processing elements wired together in parallel. Unlike conventional computers, which follow algorithms or rules to produce a result, neural networks act as adaptive filters. They are trained by feeding them inputs along with the correct "answers" to those inputs; this information changes the way the network's elements are connected, so that the next similar input produces a similarly correct output.

One of the issues neural network designers have struggled with over the years is how to structure a network without prejudging the situations it will encounter. Other approaches to artificial intelligence, such as building in so-called behaviors or creating expert systems, have the disadvantage of generally requiring some knowledge about the world before they start. In behavioral robots (those with an automatic, preprogrammed response to stimuli from the outside world) that knowledge can be hard-wired, whereas in expert systems the knowledge is contained in the software.

Engineers Ramkrishna Prakash and Haluk Ögmen wanted, instead, for their robot to learn on the fly the way people do, adapting as circumstances change. The solution they came up with is a neural-network architecture called frontal. Basically, the network allows a robot (in this case a robot arm with video cameras for eyes) to identify new objects, decide whether to pick them up, and learn from its previous good and bad decisions. The first part of the system (labeled spatial novelty) is an array of so-called gated dipoles, each of which addresses a different area in the robot's field of view. Each gated dipole performs a comparison between the incoming information about that point in space and what it was like previously.
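That spatial-novelty stage can be illustrated with a small sketch. The Python below is a toy version of the comparison described above: each location in the field of view keeps a slowly habituating trace of its past input and signals how strongly the current input departs from it. The class name, parameters, and update rule are hypothetical, chosen only to illustrate the idea; they are not the authors' actual gated-dipole network.

```python
import numpy as np

class SpatialNoveltyMap:
    """Toy novelty detector loosely inspired by an array of gated dipoles:
    each location keeps a slowly habituating trace of past input and
    reports how much the current input departs from it.
    (Illustrative sketch only, not the published model.)"""

    def __init__(self, shape, habituation_rate=0.1):
        self.trace = np.zeros(shape)          # what each location "was like previously"
        self.habituation_rate = habituation_rate

    def step(self, frame):
        # Novelty: rectified difference between the incoming frame and the stored trace.
        novelty = np.maximum(frame - self.trace, 0.0)
        # Slowly move the trace toward the current frame (habituation).
        self.trace += self.habituation_rate * (frame - self.trace)
        return novelty

# Usage: an object appearing at one location produces a transient novelty
# response that fades as that location habituates to it.
detector = SpatialNoveltyMap(shape=(4, 4))
frame = np.zeros((4, 4))
frame[2, 2] = 1.0                     # new object appears
print(detector.step(frame)[2, 2])     # large response on first appearance (1.0)
print(detector.step(frame)[2, 2])     # weaker response as the trace catches up (0.9)
```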
