All of these projects are built with a single self-written library; its only dependencies are my own matrix and vector libraries. The drawing is all done by a fairly simple self-written div-manipulation library, which was heavily improved with this later project. The whole thing began with this video from the Coding Train. I used this project as my entry into learning JavaScript in general, since I found the whole idea fascinating and it pushed me to learn more. After about two weeks (almost non-stop) and a ton of new math, I had coded out, from scratch, a basic NN, RNN, LSTM, and CNN, with all the backpropagation code. It was a really fun learning experience.
The little dudes each have a simple two-dense-layer brain. Once they all die from hunger, the two best dudes have their brains mixed together and become the parents of the next generation. You can click on a dude to see its brain in action in the top right corner.
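For the curious, the generation step could be sketched like this, assuming each brain's weights can be flattened into a plain array; crossover(), mutate(), and nextGeneration() are illustrative names, not my library's actual API.

```js
// Mix two parent weight arrays gene by gene.
function crossover(a, b) {
  return a.map((w, i) => (Math.random() < 0.5 ? w : b[i]));
}

// Nudge a small fraction of the weights with random noise.
function mutate(weights, rate = 0.05, amount = 0.1) {
  return weights.map(w =>
    Math.random() < rate ? w + (Math.random() * 2 - 1) * amount : w
  );
}

// Once everyone starves, breed the next generation from the two fittest.
function nextGeneration(population, size) {
  const [best, second] = [...population].sort((a, b) => b.fitness - a.fitness);
  const children = [];
  for (let i = 0; i < size; i++) {
    children.push(mutate(crossover(best.weights, second.weights)));
  }
  return children;
}
```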
This was inspired by this project, after I saw it on the Coding Train. I already had all the brain code, so strapping a rocket to it couldn't hurt. The rockets have basic rocket physics: they need to fight gravity, and they can only turn using side boosters; they can't simply rotate themselves. The rockets can't see the target; they only know their own speed, their x and y position, and how much fuel they have left. Once they all die, the fittest gets cloned and mutated. The fittest's brain is shown in the top right corner.
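The physics update is simple enough to sketch; the field names below are assumptions for illustration, not the project's real code.

```js
// One physics tick: gravity always pulls, the main booster pushes along
// the current heading, and side boosters only apply torque.
const GRAVITY = 0.05; // screen coordinates, so positive y is down

function updateRocket(r, mainThrust, sideThrust) {
  if (r.fuel <= 0) { mainThrust = 0; sideThrust = 0; } // no fuel, no thrust
  r.angularVel += sideThrust * 0.01; // can't rotate directly, only torque
  r.angle += r.angularVel;
  r.vx += Math.sin(r.angle) * mainThrust;
  r.vy += -Math.cos(r.angle) * mainThrust + GRAVITY;
  r.x += r.vx;
  r.y += r.vy;
  r.fuel -= Math.abs(mainThrust) + Math.abs(sideThrust);
}
```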
This was a much more ambitious version of the eaters above. Each worm has unique DNA, which determines how many eyes it has, what senses it has (like feeling for food with its body), what it can do (e.g. walk forward, turn, see, eat), and what its brain is like: how many nodes and layers, which activation functions, and so on. Worms can get fat from eating more than they can handle, which lets them live longer. Nothing in the classes is hard-coded; their eyes, for example, are just an Appendage() class with a use() function, as in the sketch below. The idea was that later on I could add arms, legs, wings, etc., but I got distracted with other projects and forgot about this one. Perhaps I'll revisit it later and add more features. Again, click a worm to see its brain in action.
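A sketch of that appendage idea, with an illustrative constructor and hypothetical world helpers:

```js
// Every body part is just an Appendage whose use() either senses or acts;
// the worm's brain wires up to whatever appendages its DNA gave it.
class Appendage {
  constructor(name, use) {
    this.name = name;
    this.use = use; // (worm, world) => array of sensor/action values
  }
}

// An eye senses: it returns the distance to food along the worm's heading
// (world.distanceToFood() is a hypothetical helper).
const eye = new Appendage('eye', (worm, world) => [
  world.distanceToFood(worm.x, worm.y, worm.angle),
]);

// A mouth acts: it tries to eat whatever is at the worm's position
// (world.eatAt() is likewise hypothetical).
const mouth = new Appendage('mouth', (worm, world) => [
  world.eatAt(worm.x, worm.y) ? 1 : 0,
]);
```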
This uses a model with 3 dense layers and sigmoid activation functions. It reached about 70% accuracy after a few thousand training iterations. Click 'load trained model' at the top and draw a number on the right canvas, then click 'guess' and see if the model got it right. You can also download or upload a .json of the model (though it's in the custom format used by my library), and upload CSV files of the data using the same button.
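For reference, a from-scratch sketch of the forward pass through stacked sigmoid dense layers; the layer sizes in the comment are assumptions, not the demo's exact ones.

```js
const sigmoid = x => 1 / (1 + Math.exp(-x));

// One dense layer: out[i] = sigmoid(sum_j W[i][j] * in[j] + b[i])
function dense(weights, biases, inputs) {
  return weights.map((row, i) =>
    sigmoid(row.reduce((sum, w, j) => sum + w * inputs[j], biases[i]))
  );
}

// e.g. 784 pixels -> 64 -> 32 -> 10 digit scores; the index of the
// largest output is the model's guess.
function forward(layers, pixels) {
  let a = pixels;
  for (const layer of layers) a = dense(layer.weights, layer.biases, a);
  return a;
}
```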
This model uses 2 convolution layers with ReLU activations and max-pooling, followed by one dense layer. It is much more effective than the NN above, reaching 92% accuracy on the testing data. It could probably go much higher, but training isn't very fast and pushing further felt pointless, so this is the accuracy after only a couple thousand training cycles, not even one full epoch. Same controls as above, if you want to try it out.
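A compact sketch of what one convolution → ReLU → max-pool stage does on a single-channel 2D image (the real model runs several filters and channels):

```js
const relu = x => Math.max(0, x);

// Valid convolution with one square kernel, ReLU applied to each sum.
function convRelu(img, kernel) {
  const k = kernel.length, out = [];
  for (let y = 0; y + k <= img.length; y++) {
    const row = [];
    for (let x = 0; x + k <= img[0].length; x++) {
      let sum = 0;
      for (let i = 0; i < k; i++)
        for (let j = 0; j < k; j++) sum += img[y + i][x + j] * kernel[i][j];
      row.push(relu(sum));
    }
    out.push(row);
  }
  return out;
}

// 2x2 max pooling: keep the strongest response, halve each dimension.
function maxPool(img) {
  const out = [];
  for (let y = 0; y + 1 < img.length; y += 2) {
    const row = [];
    for (let x = 0; x + 1 < img[0].length; x += 2)
      row.push(Math.max(img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1]));
    out.push(row);
  }
  return out;
}
```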
This model has a single recurrent layer followed by a dense layer. It can basically learn one word. Type a word in the first box, click 'use word', then hit start to begin training. Training automatically stops when accuracy hits 100%. Then try typing in the second box; the model's prediction appears in the parentheses.
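One step of a simple recurrent layer looks roughly like this; the parameter shapes and the one-hot alphabet are assumptions for illustration.

```js
// h = tanh(Wxh * x + Whh * hPrev + bh): the new hidden state mixes the
// current character with whatever the layer remembers so far.
function rnnStep(Wxh, Whh, bh, x, hPrev) {
  return bh.map((b, i) => {
    let sum = b;
    for (let j = 0; j < x.length; j++) sum += Wxh[i][j] * x[j];
    for (let j = 0; j < hPrev.length; j++) sum += Whh[i][j] * hPrev[j];
    return Math.tanh(sum);
  });
}

// Feed a word through character by character, carrying the hidden state;
// the dense layer on top turns each state into next-letter scores.
function runWord(params, word, hiddenSize, alphabet) {
  let h = new Array(hiddenSize).fill(0);
  for (const ch of word) {
    const x = alphabet.map(a => (a === ch ? 1 : 0)); // one-hot encode
    h = rnnStep(params.Wxh, params.Whh, params.bh, x, h);
  }
  return h;
}
```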
This model swaps in a single LSTM layer, which lets it learn much longer words and sentences than the model above. The plain recurrent model can't learn branching pathways: in the word 'hello', for example, an 'l' can be followed by either another 'l' or an 'o'. The LSTM layer is way better at that, and it can even learn the entire alphabet, or a sentence with multiple spaces. Amazing for something so simple.
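For comparison with the plain recurrent step above, here's a sketch of a single LSTM cell step; the gates are what let it keep longer context (like how many l's it has already seen), and the parameter names are illustrative.

```js
const sig = x => 1 / (1 + Math.exp(-x));

// Each gate is a small dense layer over [input, previous hidden].
function gate(W, b, xh, act) {
  return b.map((bi, i) => act(W[i].reduce((s, w, j) => s + w * xh[j], bi)));
}

function lstmStep(p, x, hPrev, cPrev) {
  const xh = [...x, ...hPrev];
  const f = gate(p.Wf, p.bf, xh, sig);       // forget gate: drop old memory
  const i = gate(p.Wi, p.bi, xh, sig);       // input gate: what to write
  const g = gate(p.Wg, p.bg, xh, Math.tanh); // candidate values to write
  const o = gate(p.Wo, p.bo, xh, sig);       // output gate: what to reveal
  const c = cPrev.map((cv, k) => f[k] * cv + i[k] * g[k]); // new cell state
  const h = c.map((cv, k) => o[k] * Math.tanh(cv));        // new hidden state
  return { h, c };
}
```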
This is a simple test I made after learning about word2vec and how it works. A single dense layer acts as the encoder: it takes the sentences in the textarea and runs them through; after training, I take the weight values out of the middle layer. That gives each word a 5-dimensional vector, represented with x, y position and h, s, l for colour (it's easier to see similarities in HSL than in RGB). It creates some interesting patterns pretty fast, like placing similar words on the same axis or giving them the same shade.
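Mapping a 5-dimensional embedding onto the screen the way described could look like this; the scaling ranges are assumptions, and the div is assumed to be absolutely positioned.

```js
// Two components become screen position, three become an HSL colour.
function drawWord(div, vec) {
  const [x, y, h, s, l] = vec;
  div.style.left = `${x * 500}px`;
  div.style.top = `${y * 500}px`;
  div.style.color = `hsl(${h * 360}, ${s * 100}%, ${l * 100}%)`;
}
```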
This uses a similar approach to the MNIST CNN above, but to detect one of three images: a car, a star, or a fish. It could handle more classes, but that requires more training, which is slow and unnecessary for a simple proof-of-concept. It achieves pretty good accuracy after 15,000 training iterations, about half an epoch of the training data I used. Same controls as the other drawing models.
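The label handling for three classes is simple enough to sketch: one-hot targets for training, and an argmax over the network's output for guessing (names are illustrative).

```js
const CLASSES = ['car', 'star', 'fish'];

// One-hot target vector, e.g. oneHot('star') -> [0, 1, 0]
const oneHot = label => CLASSES.map(c => (c === label ? 1 : 0));

// The guess is whichever class scored highest.
function predict(outputs) {
  let best = 0;
  for (let i = 1; i < outputs.length; i++)
    if (outputs[i] > outputs[best]) best = i;
  return CLASSES[best];
}
```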