
THE FURTHER DEVELOPMENT OF SEMANTIC NEURAL NETWORK MODELS

 

Dmitry Shuklin

Kharkov National University of Radio Electronics, Lenina st., 14, Kharkov, Ukraine

Email: shuklin@bk.ru

Keywords: Neural networks, Virtual reality, Semantics

Abstract: Existing neural network models are reviewed. A conclusion is drawn that introspection and self-modification abilities are necessary in such a system. To support these abilities the concept of a pointer to a neuron is introduced; pointers represent virtual connections between neurons. In this model the bodies of the neurons and the signals transferred through the connections between neurons constitute a physical body, while the virtual connections between neurons constitute an astral body. It is proposed to build models of artificial neural networks on the basis of a virtual machine that supports the possibility of paranormal effects.


1 INTRODUCTION

The development of program systems, which exist in virtual reality, differs from development oriented towards products existing in physical reality. In software development the cost of the resulting product is determined as the sum of the charges for three resources: (1) the memory necessary for executing the developed program; (2) the performance of the developed program; (3) the cost of developing the system. Existing software development practice shows that the most expensive resource is development itself. The costs are highest at the first stages of solving desperate tasks, when it is not yet clear whether the task can be solved at all. The creation of artificial intelligence can be considered an unsolvable problem of this kind. The problem of intelligence has existed since ancient times, and the problem of artificial intelligence since the middle of the twentieth century; we can safely claim that it will remain important for the foreseeable future. So any opportunity, however small, for overcoming the crisis in research on intelligent behaviour should be welcomed, and no line of research should be discarded out of hand. Every hypothesis that can help us to make one more step in the study of intellect and mind should be reviewed.

We are studying a computing system. An artificial intelligence system will exist in virtual reality, not in the physical reality where humans exist. The reality in which an artificial intelligence exists can be much more complicated than physical reality, or much simpler. The structural laws operating in a virtual reality need not coincide exactly with those of natural reality. There is no necessity for us to create an artificial intelligence similar to natural intelligence. We have the right to create any virtual reality with any properties. We can nest some layers of virtual reality inside others indefinitely, and cross or break the boundaries separating them [Dudar Z.V., Shuklin D.E., 2000 (1)]. The main thing is the possibility of modelling the obtained abstract solutions with the computing machinery that exists in our physical reality.

2 ARTIFICIAL NEURAL NETWORKS

When artificial neural networks are discussed, networks of the perceptron type are most often meant. In this case a neuron consists of an adder unit, an activation function, and synapses that multiply the incoming signals by their weights. A topology is created from some quantity of such neurons, grouped into a multi-layer structure. The obtained network can be taught using one of the teaching algorithms, for example the error backpropagation algorithm [Wasserman P.D., 1989]. The structure of the network is completely fixed by the developer; the number of neurons and links does not change. Of course this is a simplified description of an artificial neural network of the perceptron type, but it is sufficient for what follows.
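To make this concrete, here is a minimal sketch of such a perceptron-type neuron in Python; the function names and the choice of a sigmoid activation are illustrative assumptions, not part of the model discussed in this paper.

    import math

    def perceptron_neuron(inputs, weights, bias):
        # Synapses multiply each incoming signal by its weight; the adder sums them.
        s = sum(x * w for x, w in zip(inputs, weights)) + bias
        # The activation function (here a sigmoid) produces the output signal.
        return 1.0 / (1.0 + math.exp(-s))

    # A layer is a group of such neurons fed the same inputs; layers are stacked
    # into the fixed multi-layer topology configured by the developer.
    def layer(inputs, weight_rows, biases):
        return [perceptron_neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]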

Let us review the idea of a neural network from a higher level of abstraction. What is a neural network as an idea? A neuron is a unit that processes incoming information and transmits the result to other units. Synchronization of the functioning of all neurons is very important. Apparently, the complexity of what happens inside a neuron is not an important factor in this context: a neuron can perform either elementary or quite complicated operations, and this does not change the functionality of the system as a whole. If necessary, a group of elementary operations can be transformed into a combination of neurons, which allows working with complicated functions as abstract combinations. In this way units with complicated behaviour can be realized as groups of elementary neurons. Provided that the neural network is equivalent to the Turing machine, the network is able to compute any computable function. It should be mentioned that nothing prevents a developer from using this freedom to create neurons whose behaviour is exactly what the solution of the problem requires. Apparently a neural network is able to realize any function that its developers can conceive and describe. But proving that any function can be realized is just the beginning: for practical use of the developed system it is necessary to realize not an arbitrary functionality but the specific functionality required by the given context.

What else is necessary in the description of an abstract neural network that can execute any computable function? The ability of the function to be modified dynamically. If the network structure is fixed exactly, as in a classical perceptron, then once the network is realized according to the requirements it can adapt to variable conditions only within the limits of that structure. Of course it is possible to use the methodology of metasystem transitions [Turchin V.F., 1977] and the dynamics of the processed impulses, but that would not be a solution of the problem, only an escape from it.

Self-modification and self-analysis require a dynamic topology. The neural network must be able to process its own neurons as data; then one part of the neural network will be able to change the topology of another part. It is interesting that John von Neumann created a neural network architecture that differs from the perceptron (we will not review the restrictions imposed by the lattice; for the details needed to realize the self-reproducing automaton see [Neumann, J., 1966]). The properties of this architecture that are important for the present work are the following:

- A neuron is a simple device for processing input signals. The logic functions of conjunction, disjunction and inversion were used, as in the original work of von Neumann.

- A neuron is able to change the function it executes dynamically. One neuron can perform different functions at different times, but only one of the available functions is executed during any previously defined period of time.

- A neuron can change its links to other neurons dynamically.

- One part of a neural network is able to analyze the condition of another network part.

- One part of a neural network is able to change the topology of another part of the network.

If we have a functionally complete set of functions, then their complexity affects only the efficiency of computation. As von Neumann showed in his work [Neumann, J., 1966], this neural network is equivalent to the Turing machine. Therefore only a single-level metasystem transition is necessary [Turchin V.F., 1977]: as a result of this transition the neural network realizes a tape with a program and a finite automaton, which together are equivalent to the Turing machine. The particular function of a neuron is not the most important thing in the von Neumann network; its self-reflection and self-modification abilities are. Suppose a structure has already been formed in the network. One part of the neural network can analyze the structure of another part, and on the basis of this analysis it can make a decision and modify neurons and links.
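As a sketch only, the properties listed above can be rendered in a few lines of Python; this is not von Neumann's lattice automaton, merely an interpretation of the listed abilities, and all names are illustrative.

    class Cell:
        def __init__(self, fn, inputs=None):
            self.fn = fn                 # current logic function; replaceable at run time
            self.inputs = inputs or []   # links to other cells; rewirable at run time
            self.state = False

        def step(self):
            self.state = self.fn([c.state for c in self.inputs])

    AND = lambda xs: all(xs)     # conjunction
    OR  = lambda xs: any(xs)     # disjunction
    NOT = lambda xs: not xs[0]   # inversion

    a, b = Cell(OR), Cell(AND)
    b.inputs = [a]                # one part of the network rewires another part,
    b.fn = NOT                    # changes the function another cell executes,
    observed = b.inputs[0].state  # and reads (analyzes) another cell's condition.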

One part of the neural network can use another part as a memory store by dynamically connecting to different neuron-cells, checking their status and changing it. The analogy that can be drawn between a DNA molecule and the tape of an automaton is interesting. Furthermore, a part of the neural network can form a constructing branch and, using some of the neurons, build a device with additional functions. In this case the memory recorded on the tape is used like DNA: on the basis of this memory a new part of the network can be constructed.

3 SEMANTIC NEURAL NETWORKS

Let us briefly review semantic networks [Dudar Z.V., Shuklin D.E., 2000 (2)] and the corollaries that appear when von Neumann's ideas are applied to them. We accept that a virtual reality [Dudar Z.V., Shuklin D.E., 2000 (1)] can be developed that is interesting from the user's point of view. The von Neumann network imposes limitations on link topology, but we consider the case without these limitations. In the original network only logical values can be processed; we accept that inexact values can be processed too. All neurons in the von Neumann network are synchronized by clock ticks; for the further use of self-synchronizing circuit techniques we accept that neurons can be either free-running or synchronized.

In contrast to the von Neumann network, there are no limitations on the topology of neurons in semantic networks. This makes the relative addressing of neurons, as used by von Neumann, impossible; absolute addressing should be used instead. Every neuron should have a unique identifier providing direct access to it, and neurons interacting through axons and dendrites should hold each other's identifiers. Absolute addressing can be modelled by means of neuron specificity, as is realized in biological neural networks [Gaze R.M., 1970].

The initial description of semantic networks [Dudar Z.V., Shuklin D.E., 2000 (2)] says nothing about self-reflection and self-modification abilities. Of course we can say that the semantic network inherited these abilities from its prototype, the von Neumann network, and it is true that the idea of self-reflection was inherited. But it is a long way from an idea to its practical realization. So we need to provide the means for one part of a neural network to be analyzed and modified by another part of the network.

Let us take as given that the inexact data transmitted between the neurons is not sufficient for this purpose. Let us accept that a pointer to a neuron exists. This pointer is a unique number used as the neuron's identifier in the repository of neurons. Let neurons process not only inexact data but also each other's pointers. It is obvious that this can be realized technically.

Now we need to ascertain what the pointer is. A pointer to a neuron is a virtual link that is not realized as an axon or a dendrite. Let the neurons constructed in virtual reality interact not only by transmitting signals between axons and dendrites but also by paranormal effects. A neuron thus has input and output signals and a set of virtual links to other neurons: it can interact with other neurons by processing their pointers even though it has no signal links to them. The difference between pointers and signal links is also obvious. A signal link is a two-sided structural formation, connected both with the source and with the receiver of a signal; a pointer is one-sided. The owner of a neuron's identifier can initiate an interaction with that neuron, but it is technically impossible to determine whether a pointer to a given neuron exists anywhere unless a search for this pointer is carried out. The links between axons and dendrites can be considered the long-term memory of the system, invariant with respect to the context. The signals and pointers processed by neurons can be considered operational information that depends on the current context. Loss of signals or pointers (here an analogy with an epileptic seizure can be drawn) should not influence the established structure of the system and should not lead to a change of the long-term memory or of the personality.

The presence of pointers allows indirect interactions. The analogy from traditional programming languages is double or triple pointer dereferencing. Neuron 1, which has a virtual link to neuron 2, can interact with neuron 3 if neuron 2 holds a pointer to neuron 3. This permits wide interaction between neurons without direct contact through physical or virtual links.
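A short sketch may make the distinction concrete. It assumes a repository that issues opaque unique identifiers, anticipating the zero-dimensional space of Section 4; all class, field and slot names here are illustrative assumptions.

    import itertools

    class Repository:
        _ids = itertools.count(1)
        def __init__(self):
            self.neurons = {}       # identifier -> neuron; identifiers carry no ordering
        def create(self, neuron):
            nid = next(self._ids)   # a pointer is just this unique number
            self.neurons[nid] = neuron
            return nid
        def deref(self, nid):
            return self.neurons[nid]

    class Neuron:
        def __init__(self):
            self.signal_links = []  # ids of neurons joined by two-sided axon-dendrite links
            self.pointers = {}      # one-sided virtual links, held by slot name

    repo = Repository()
    n1, n2, n3 = (repo.create(Neuron()) for _ in range(3))
    repo.deref(n1).pointers["peer"] = n2
    repo.deref(n2).pointers["target"] = n3

    # Indirect interaction: neuron 1 reaches neuron 3 only through neuron 2's pointer.
    via = repo.deref(n1).pointers["peer"]
    neuron3 = repo.deref(repo.deref(via).pointers["target"])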

To provide self-reflection, neurons that analyze and change the network structure can be introduced into the network. Let us introduce receptor neurons that react to structural elements of the developing neural network. Such receptor neurons pass into the excited state upon fulfilment of certain conditions describing the presence or absence of neurons, and of links between neurons, with specific characteristics. Let us also introduce effector neurons that modify the structure of the neural network when they pass into the excited state.

For completeness of the system, receptor neurons and effector neurons must be self-applicable. A receptor neuron should be able to analyze other receptor neurons, including neurons of its own type. An effector neuron should be able to modify other effector neurons, not only the neurons that perform signal processing. This is possible thanks to the pointers to neurons accepted earlier.

As receptor neurons we can accept neurons that detect the presence or absence of neurons of specific types linked to a given neuron, neurons that detect the presence of connections between neurons, and neurons that detect the presence or absence of a connection of a specific type to a neuron. If the conditions are fulfilled the receptor neuron passes into the excited state, otherwise into the passive state. As effector neurons we can accept neurons connecting two other neurons, neurons creating other neurons, and effectors deleting neurons or links. When its level of excitation exceeds a certain threshold, the effector neuron acts.
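Continuing the repository sketch above, a receptor and an effector of the kinds just listed might look as follows; the threshold value and helper names are assumptions for illustration.

    def receptor_has_link(repo, watched_id, other_id):
        # Excited (True) when the watched neuron has a signal link to the given
        # neuron, passive (False) otherwise.
        return other_id in repo.deref(watched_id).signal_links

    def effector_connect(repo, excitation, a_id, b_id, threshold=0.5):
        # When its excitation exceeds the threshold, the effector acts:
        # it creates a signal link between the two neurons it points to.
        if excitation > threshold:
            repo.deref(a_id).signal_links.append(b_id)
            repo.deref(b_id).signal_links.append(a_id)   # signal links are two-sided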

In the ELEX system [Shuklin D.E., 2001] an experiment was carried out on synthesizing a neural network structure from an external task by means of the system's own tools. The virtual machine executing the semantic neural network was given support for the following types of neurons concerned with self-applicability (a sketch in code follows the list):

- Neuron-linker: it connects two neurons, indicated by pointers, with a real link.

- Copier: it copies a virtual link from one neuron to another.

- Neuron-replicator: it causes the neuron pointed to by its first virtual link to divide, and keeps a virtual link to the new neuron as its second virtual link.

- Neuron devouring another neuron: it deletes the neuron pointed to by a virtual link.

- Neuron devouring a link: it deletes the signal link between two neurons pointed to by virtual links.

- Neuron keeping a pointer: it stores a pointer to a neuron; it is analogous to an ordinary memory cell in computer memory.
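The six types can be read as operations over the repository sketch of this section; the code below is only an interpretation of the list, since the internals of ELEX are not published at this level of detail.

    import copy

    def linker(repo, a_id, b_id):            # neuron-linker
        repo.deref(a_id).signal_links.append(b_id)
        repo.deref(b_id).signal_links.append(a_id)

    def copier(repo, src_id, dst_id, slot):  # copier of a virtual link
        repo.deref(dst_id).pointers[slot] = repo.deref(src_id).pointers[slot]

    def replicator(repo, self_id):           # neuron-replicator
        me = repo.deref(self_id)             # assumes the "first" slot is already set
        clone = copy.deepcopy(repo.deref(me.pointers["first"]))
        me.pointers["second"] = repo.create(clone)   # keeps a pointer to the new neuron

    def devour_neuron(repo, victim_id):      # neuron devouring another neuron
        del repo.neurons[victim_id]

    def devour_link(repo, a_id, b_id):       # neuron devouring a link
        repo.deref(a_id).signal_links.remove(b_id)
        repo.deref(b_id).signal_links.remove(a_id)

    def keeper(repo, self_id, target_id):    # neuron keeping a pointer
        repo.deref(self_id).pointers["kept"] = target_id   # an ordinary memory cell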

Let us review the network's ability to reconstruct itself after damage. Obviously, the structure of the neural network determines the individuality of the intellect. The death of a neuron means forgetting all the information that corresponds to that neuron. It is impossible to reconstruct a neuron containing the knowledge that a pink elephant exists without the system learning about that elephant again. If we accept that the presence of a neuron means the presence of the knowledge about the elephant, then the absence of the neuron means the absence of that knowledge. Deleting a neuron deletes the knowledge; deleting a neuron means irreversible memory loss. Consequently, regeneration of neural tissue from a specification, as in the DNA example given above, is impossible. To reconstruct a damaged neuron, the system must be taught the same information that it possessed before.

The semantics of a neuron is determined neither by its internal state nor by its internal complexity. It is determined by the links from the neuron to its neighbours, and only the neighbouring neurons define the semantics of a single neuron. Therefore it does not matter at which exact point of the area the group of neurons describing the concept of a pink elephant is situated. What matters is the presence of the links that define the concept of pink and the concept of the elephant, in contrast to the concrete objects that are their denotata.

Regeneration of damaged receptors, effectors or other cellular constructions is possible. If the semantic structure involving the other cellular constructions is not damaged, the links maintained by the regenerated cellular constructions can be restored. Even if we fail to restore an exact copy of the cellular construction that existed before regeneration, the isomorphism of the links allows the newly generated neurons to carry the same functional load.

Injuries to the central nervous system can be eliminated by a process of re-teaching. If the neuron containing the information about a pink elephant is deleted, the system forgets about the elephant. When information about a pink elephant again appears in the area of the receptors, a new neuron will be created somewhere in the network and linked to neighbouring neurons in such a way as to represent the concept anew.

Regeneration of the neural network takes place not in the damaged areas but in the area of the preserved tissue. Damaged areas should degenerate and disperse in order to exclude an epilepsy-like effect. Later the number of generated neurons will grow and, after teaching, healthy tissue will take the place of the damaged tissue. Obviously, the network will expand as the system is taught the new information.

4 METAMODELS

The semantic neural network is an abstract model. In practice it is executed by a virtual machine: the laws under which the neurons exist are defined by the virtual reality. Is it necessary for these laws to be stable? If we introduce into the virtual machine the ability to break the ordinary laws of the virtual reality in well-founded cases only, this turns out to be quite a useful property. The developer, as the creator of a new virtual reality, can technically break some of the rules fixed for this reality, or change these rules temporarily. If we give the virtual machine the ability to analyze the neural network it is executing, the machine can change the structure of the network or the parameters of the signals transferred through the links. From the point of view of the neural network, such a temporary violation of the rules looks like non-determined chance, a miracle. Virtual miracles make developing the system much easier. Instead of realizing a program block by neural network tools (which is possible because of the equivalence of the neural network and the Turing machine), we can realize the same service by lower-level tools and activate it when needed. This saves development time and increases the performance of the system. From the network's point of view, work that would require a great many resources can be done almost instantly thanks to a virtual miracle.
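As an illustration only, a virtual machine might realize such a miracle by intercepting a region of the network and serving it with a lower-level tool; every name in this sketch is hypothetical, since the paper does not fix the mechanism.

    class VirtualMachine:
        def __init__(self):
            self.miracles = {}          # region name -> native implementation

        def register_miracle(self, region, native_fn):
            # The developer temporarily suspends the laws of the virtual reality
            # for one well-founded case.
            self.miracles[region] = native_fn

        def evaluate(self, region, inputs, network_fn):
            # Seen from inside the network, the miracle is instantaneous and
            # lawless; without it, the ordinary (slow) neural route is taken.
            if region in self.miracles:
                return self.miracles[region](inputs)
            return network_fn(inputs)

    vm = VirtualMachine()
    vm.register_miracle("sorting", sorted)   # a lower-level tool replaces a subnetwork
    print(vm.evaluate("sorting", [3, 1, 2], network_fn=lambda xs: xs))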

The presence of virtual links between neurons allows us to divide the network model into submodels. Let us name these submodels bodies. The physical body consists of the bodies of the neurons and the links between them; this body exists in a spatial continuum. In contrast to the real physical world, the number of space dimensions of a virtual world can differ from three. The informational body consists of the impulses transferred between neurons. The astral body consists of the virtual links between neurons and of the general principles of the organization of the neural network that are not reflected in the behaviour logic of a single neuron. The models of the physical, informational and astral bodies are maintained during the functioning of the virtual machine.

Space gives the base on which the bodies of the neurons and their links are located. For the semantic neural network the space has, in effect, less than one dimension: the identifiers addressing neurons in the repository carry no ordering relation. Consequently, the neurons cannot be ordered by their position in the space of the repository of neurons. Every neuron can contact any other neuron without any restriction on the distance between them or on the topology of links. Let us name this space zero-dimensional.

The space of the von Neumann neural network reviewed earlier is two-dimensional. If such a network is realized on a two-dimensional crystal, efficient emulation is practically impossible because of the restrictions on the topology of links between neurons. If a three-dimensional crystal is used, most of the components must serve not for signal processing but for providing communication between separate neurons. Dynamic commutation of links over time is difficult to realize in a silicon neural calculator, and the creation of new neurons is almost impossible: idle reserve neurons would have to be created in advance and connected to the network as required. It is possible to realize a neural network in silicon if it is built on a three-dimensional crystal where the cells have addresses analogous to IP addresses and routers establish links according to commands generated by the cells. Hence the realization of a neural network model on the basis of biological tissue, which is able to change its structure over time, looks more promising.

The model of the astral body first of all generates pointers to neurons and transfers these pointers among the computing structures; some neurons can then pass the pointers to other neurons in their signals. We can also assign to the astral body model such neural network teaching functions as error backpropagation [Wasserman P.D., 1989], Amosov's EIS (Excitation-Inhibition System) [Amosov N., 1973], synthesis of a synchronized linear tree [Shuklin D.E., 2001] and others. This decision is effective when there is no need to change the organizing principle during the life of the neural network. It helps to develop a more universal model of the astral body, in the sense of the Turing machine, and to provide its interaction with the physical and informational bodies.

The astral body model can function in different ways, including the realization of virtual miracles. It is able to create new links between neurons and to transform neuroglia into new neurons. Dividing old neurons is not useful. Every neuron in the semantic neural network corresponds to a concept of the subject area; the division of a neuron leads to the division of the concept, and this operation is not equivalent to teaching a new concept. If uncontrolled division of neurons takes place during the teaching process, the most probable result is informational chaos in the system. It is more effective to accept the existence of a miracle and to transform free space (neuroglia) into a neuron at once, in the area where it is needed. Migration of neurons is possible if necessary, but in the zero-dimensional space widely used when modelling on IBM-compatible PCs the effectiveness of neuron migration is doubtful.

5 CONCLUSIONS

The physical body (the neural network) controls the effectors of the system and supplies to the astral body the information necessary for modifying the structure of the neural network. Using the teaching rules stored in the network structure (the memory), the astral body can change (teach) this neural network, including the teaching rules themselves. Consequently, any rules for teaching a neural network or synthesizing its topology become a particular case of the capabilities of this system.

As a result, the developed neural network can be realized efficiently by existing PC tools. The semantic neural network is equivalent to the Turing machine because of the developer freedom accepted above. This means that on the basis of the semantic neural network it is possible to realize a system computing any function computable by Turing machines. For example, as a particular case, such a network can model a multilayer perceptron with error backpropagation. The neurons of the perceptron can be constructed in the network from separate neurons executing the simple functions of summation, multiplication and activation. The teaching algorithm for the perceptron can be realized as a separate fragment of the network, able to analyze and modify the fragment corresponding to its own perceptron.
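For illustration, the construction named in the last paragraph can be sketched as follows, with each elementary function carried by a separate node; the node names are assumptions.

    import math

    def mult_node(x, w):                 # multiplication neuron
        return x * w

    def sum_node(values):                # summation neuron
        return sum(values)

    def act_node(s):                     # activation neuron (sigmoid, for illustration)
        return 1.0 / (1.0 + math.exp(-s))

    def composed_perceptron(inputs, weights):
        # The perceptron neuron is assembled from the elementary nodes above.
        products = [mult_node(x, w) for x, w in zip(inputs, weights)]
        return act_node(sum_node(products))

    print(composed_perceptron([1.0, 0.5], [0.3, -0.2]))   # behaves like the Section 2 neuron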

 

 

References

Dudar Z.V., Shuklin D.E., 2000. Implementation of neurons for semantic neural nets that's understanding texts in natural language. In Radioelektronika i informatika, KhTURE, 2000, No 4, P. 89-96.

Wasserman P.D., 1989. Neural Computing: Theory and Practice. Van Nostrand Reinhold, New York. 230 p.

Turchin V.F., 1977. The Phenomenon of Science: A Cybernetic Approach to Human Evolution. Columbia University Press, New York.

Neumann J. von, 1966. Theory of Self-Reproducing Automata. Edited and completed by Arthur W. Burks. University of Illinois Press, Urbana and London.

Dudar Z.V., Shuklin D.E., 2000. Semantic neural net as a formal language for texts description and parsing in natural language. In Radioelektronika i informatika, KhTURE, 2000, No 3, P. 72-76.

Gaze R.M., 1970. The Formation of Nerve Connections: A Consideration of Neural Specificity, Modulation and Comparable Phenomena. Academic Press, London and New York.

Shuklin D.E., 2001. Implementations of semantic neural net for expert system that's transforming sense of text in natural language. In Radioelektronika i informatika, KhTURE, 2001, No 2, P. 61-65.

Amosov N., Kasatkin A., Kasatkina L., Talayev S., 1973. Automatic Machines and Intelligent Behavior: The Modeling Experiment. Naukova Dumka, Kiev. 261 p.

Shuklin D.E., 2001. Structure of semantic neural net parsing text meaning in real time. In Kibernetika i sistemnyi analiz, 2001, No 2, P. 43-48.

 