Neural networks, genetic algorithms and C++

I've been writing some neural network classes (experimenting with the appropriateness of neural networks for forecasting, to be exact). I can see how using backpropagated learning algorithms on neural networks can help teach the network to recognize/associate groups of data and even trends (like associating a graphic 'A' with a text 'A' in binary format).
However, I can't quite seem to grasp the concept of applying genetic algorithms to the neural network learning process. Since genetic learning requires finite states (binary or DNA crossover), and input weights are real numbers (floating points), how is it possible to represent a genetic mutation or crossover of the input weights on a neural network?
Are the weights generally a finite set in an array:
float weightArray[10] = {.0, .1, .2, .3, .4, .5, .6, .7, .8, .9};
and the position is used for crossover?

Thanks,
-- CM


Comments

  • Although your life story might be interesting, you might want to end it with a clear question related to C/C++ programming, since that is what this board is about.
  • While I do so love reading curt responses to my posts, I suppose I'll submit to the will of the guys who can't read between the lines.
    Were I using a genetic algorithm to teach a neural network class (see, it's a C++ class, so it belongs here), how would I go about quantifying the input weights of the neurons as finite states (since, technically, they are non-discrete real numbers), and, consequently, how would they be mutated (this part is optional)?

    -- CM
  • The average C++ programmer doesn't know or care anything about neurons. Since your question is so vague, I get the feeling that you purposely are not expecting a reply to it at all...
  • Try explaining it in terms C/C++ programmers understand -- you have a file with a bunch of numbers that you want to read into an array? That's easy; any first-year student should know how to do that.
  • No; I guess this is one of those 'never mind' questions. The subject I'm asking about is the combination of genetic algorithms with neural networks, so it is not for novices.
    Ignore the question if you've never worked with either.

    -- CM
  • Maybe you should pose your question on a board that deals with more advanced topics, such as www.forums.devshed.com.
  • Acknowledged. Thx for the tip.

    -- CM
  • "Since genetic learning requires finite states (binary or DNA crossover), and input weights are real numbers (floating points), how is it possible to represent a genetic mutation or crossover of the input weights on a neural network?"

    Candidate solutions in genetic algorithms can be composed of real numbers, for example as arrays. Typically when they are, crossover occurs so that the individual reals are not disturbed:

    A: 1.13 2.43 3.01 4.29
    B: 5.99 6.98 7.98 8.89

    ...crossed over (single-point, between the second and third elements) become:

    C: 1.13 2.43 7.98 8.89
    D: 5.99 6.98 3.01 4.29
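
    A minimal C++ sketch of that single-point crossover on real-valued weight vectors might look like the following (the Genome typedef, the function name, and the fixed cut point are illustrative assumptions, not from any particular library):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    typedef std::vector<double> Genome;   // one candidate solution's weights

    // Single-point crossover: child c copies parent a up to 'point' and
    // parent b from 'point' onward; child d gets the opposite halves.
    void crossover(const Genome &a, const Genome &b, std::size_t point,
                   Genome &c, Genome &d)
    {
        c = a;
        d = b;
        for (std::size_t i = point; i < a.size(); ++i) {
            c[i] = b[i];
            d[i] = a[i];
        }
    }

    int main()
    {
        double wa[] = {1.13, 2.43, 3.01, 4.29};
        double wb[] = {5.99, 6.98, 7.98, 8.89};
        Genome a(wa, wa + 4), b(wb, wb + 4), c, d;

        crossover(a, b, 2, c, d);          // cut between the 2nd and 3rd elements

        for (std::size_t i = 0; i < c.size(); ++i)
            std::cout << c[i] << ' ';      // prints: 1.13 2.43 7.98 8.89
        std::cout << '\n';
        return 0;
    }

    Keeping each real intact during crossover (rather than splicing its bit pattern) is what lets the operator work directly on floating-point weights.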

    Mutations can be accomplished by simply adding or multiplying individual elements by random amounts:

    A: 1.13 2.43 3.01 4.29

    ...one possible mutation of A is:

    Z: 1.13 0.02 3.01 4.29

    Given that neural networks are most often represented in computers as arrays of reals, the above (or variations) could be applied to train neural networks by GA.
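
    A similarly minimal sketch of mutation on the same kind of flattened weight vector, assuming a per-element mutation probability (the mutationRate name and the uniform noise range are assumptions for illustration):

    #include <cstddef>
    #include <cstdlib>
    #include <ctime>
    #include <vector>

    typedef std::vector<double> Genome;   // flattened network weights

    // Mutation: with probability mutationRate, perturb each weight by a
    // uniform random amount in [-0.5, 0.5).
    void mutate(Genome &g, double mutationRate)
    {
        for (std::size_t i = 0; i < g.size(); ++i) {
            if (std::rand() / (double)RAND_MAX < mutationRate) {
                g[i] += std::rand() / (double)RAND_MAX - 0.5;
            }
        }
    }

    int main()
    {
        std::srand((unsigned)std::time(0));

        double wa[] = {1.13, 2.43, 3.01, 4.29};
        Genome a(wa, wa + 4);

        mutate(a, 0.25);   // on average, about one weight in four is perturbed
        return 0;
    }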


    -Will Dwinnell
    [link=http://matlabdatamining.blogspot.com/]Data Mining in MATLAB[/link]

