This project is a simulation of a simple, single-layer neural network called a perceptron.

NOTE: The network is flawed. Some combinations of inputs to results will cause the network to loop forever when you train it. I've only been able to train up to about 4 or 5 input/result pairs. The math used to calculate the weights is too simple: I'm just summing the incoming values, and if the sum is over the activation threshold, the result neuron turns on (a Python sketch of this summing rule appears after the legend below). I need a more sophisticated summing algorithm. If you can help, let me know.

The idea is to train the network so that the weights of the synapses in the middle (wI_R) will generate the correct numbers on the right from given input numbers on the left.

INSTRUCTIONS:
STEP 1: Shift-click Scratch's green Start Flag to launch the application in Turbo Mode.
STEP 2: Select an input number to train by turning the three input neurons on the left on or off.
STEP 3: Select a desired result number by turning the three result neurons on the right on or off.
STEP 4: Click the purple Train button at the top right of the screen.
STEP 5: Repeat steps 2 through 4 about three times.

What you are doing is telling the network that you want the number on the right to appear whenever you choose the number on the left. To test that the network is trained properly, reselect the input numbers on the left and click Evaluate. The number on the right should then be set to the number you originally chose when you trained the network.

To make it easier to visualize, I'm treating the neurons as binary numbers and showing the decimal equivalent (the large numbers); a pattern of three on/off neurons is read as a three-bit binary number, so it can encode 0 through 7. At the top of the screen is a map showing the inputs and their resulting values. While the network is being trained, you will see these numbers change and the green and red circles flip back and forth. Once the network is trained successfully, all the circles should be green, with the correct result values. If training never stops, you've run into the flaw mentioned above.

LEGEND:
learn rate = How much the synapse weights are changed during each training cycle. The higher the number, the faster the network may learn, but the more error prone it can become; the smaller the number, the slower it learns, but with fewer mistakes.
input neuron I = A binary, on/off value representing whether each input neuron is activated (the big yellow circle means activated). If an input neuron is activated, the weights of its outgoing synapses are included in the sums of the result neurons.
wI_R = The weight of the synapse connecting input neuron I to result neuron R. This is the magic of a neural network: the arrangement of these synaptic weights, plus the activation function, makes it possible to encode many result values from many different input values.
RR Sum = The sum of the synapse weights for the ACTIVATED input neurons connected to the result neuron.
atR = The Activation Threshold for the result neuron. The sum of the input weights, RR Sum, must cross the activation threshold for the result neuron to become active.
result neuron R = The result (output) neurons that are activated based on the synapse weights. The result neurons are the "meaning". For example, think of the input neurons as detected rays of light and the result neurons as translating that light into things we recognize.
Pause On Success = During training, if this is turned on, the training routine pauses after it successfully trains one input value to one result.
This is useful for seeing how the network has to go through many cycles to find weights that satisfy every input/result pair: each iteration may invalidate the weights for another pair, requiring yet another cycle to try to find weights that work for all of the pairs at once.
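For anyone who wants to poke at the mechanics outside of Scratch, here is a minimal sketch in Python of the evaluate/train cycle described in the legend above. The names and the specific values of LEARN_RATE and THRESHOLD are my own illustrative assumptions, not values taken from the project itself.

    # Illustrative constants -- the project's actual values may differ.
    LEARN_RATE = 0.1   # "learn rate" in the legend
    THRESHOLD = 0.5    # "atR", the activation threshold of a result neuron

    # weights[i][r] plays the role of wI_R: the synapse from input I to result R.
    weights = [[0.0] * 3 for _ in range(3)]

    def evaluate(inputs):
        # For each result neuron, form "RR Sum" from the weights of the
        # ACTIVATED inputs only, then compare it against the threshold.
        results = []
        for r in range(3):
            rr_sum = sum(weights[i][r] for i in range(3) if inputs[i])
            results.append(1 if rr_sum > THRESHOLD else 0)
        return results

    def train_step(inputs, desired):
        # One training cycle: nudge each active synapse toward the desired
        # result by the learn rate. Returns True if the pair already matched.
        actual = evaluate(inputs)
        if actual == desired:
            return True
        for r in range(3):
            error = desired[r] - actual[r]   # -1, 0, or +1
            for i in range(3):
                if inputs[i]:                # only activated inputs contribute
                    weights[i][r] += LEARN_RATE * error
        return False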
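And here is a sketch of the outer loop that the green and red circles visualize, built on the helpers above: each epoch re-trains every stored pair, and training only ends once a whole pass leaves all pairs correct. A single-layer perceptron can only learn linearly separable mappings, which is most likely the flaw described above: for an unlearnable combination this loop would never settle, so the max_epochs cap is my own safety valve, not something the project has.

    def train_all(pairs, max_epochs=10_000):
        # pairs: list of (inputs, desired) tuples, e.g. ([1, 0, 1], [0, 1, 1]).
        for epoch in range(max_epochs):
            all_correct = True
            for inputs, desired in pairs:
                if not train_step(inputs, desired):  # may disturb earlier pairs
                    all_correct = False
            if all_correct:
                return epoch   # every circle is green
        return None            # gave up -- likely a non-separable mapping

Calling train_all([([1, 0, 1], [0, 1, 1]), ([0, 1, 0], [1, 0, 0])]) mirrors repeating STEPs 2 through 4 for two different numbers.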
I strongly recommend using Turbo Mode. The original version was written by ifugu. Thanks! https://scratch.mit.edu/projects/2671247/ I converted it to Scratch 1.4 with Retro Converter, and removed the cloud variable and username block. https://kurt.herokuapp.com/20to14 It works fine on Pyonkee, too. http://www.softumeya.com/pyonkee/en/index.html