One of the projects I found interesting for DOOM (1993) is ViZDoom (http://vizdoom.cs.put.edu.pl/). It's a platform built on DOOM that lets people create deep learning AIs. There is even a ViZDoom tournament where people send in the bots they programmed and trained to fight each other in deathmatch, to see who built and trained the best bot.

Now, with Ikemen we have a very powerful engine that can run characters created for Mugen. I wonder what it would be like if someone created a plugin or script that let people do for Ikemen what they do for ViZDoom. You'd essentially program a deep learning bot into being and then train it match after match. Then there could be competitions where people send their bots against each other.

Currently, the computer intelligence is script based. Deep learning could allow for a more naturally evolving intelligence that competes in ways that are less predictable and more fluid, given proper programming and enough quality training.

Some agreed-upon characters would have to be chosen so that it's a fair fight for everyone. But I could imagine a lot of fun potential here.

The plugin or script would need to give the AI read access to some key elements of the game. It would need to see its own and its opponent's health bars, special bars, the round timer, hit boxes, hurt boxes, collision boxes, likely the floor and walls, and maybe match outcomes. The win state would be winning the round, weighted by how efficiently it did so: the bot could look at what it did and which methods reached the win state quickest relative to the round timer. It would also, obviously, need access to all input controls (movement, A, B, C, X, Y, Z, and maybe Start). I've sketched at the end of this section what such an interface might look like.

The bot would start out knowing nothing and build off each experience as it tries random button inputs and figures out what works against opponents. It might be wise to run the first many, many matches against the current scripted bots until it's ready to face humans. Or it could be interesting to run it against humans for the entire training. One interesting idea would be some way for people to connect to it online and play against it, giving it an endless stream of human opponents to train against. But that would require everyone running the same Ikemen setup and being willing to fight the learning machine. And it would not be very interesting to fight against it for the first several thousand matches, I would think, so getting humans to fight it online from the start may yield few results.

As for possible positive outcomes: once the bot is trained well enough, it might be possible to create intelligence packs to replace the standard scripted AI. This could allow for more dynamic single-player matches. Strong computers could train the bots until they're ready, and there might be some way to export their learned behavior to run in a non-learning mode on lower-spec computers (also sketched below).
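To make the interface idea concrete, here is a rough Python sketch of the game state and inputs described above. To be clear, this is hypothetical: Ikemen exposes no such API today, and every name here (Observation, ACTIONS, the reward function, the 99-second round assumption) is my own illustration, not anything that exists.

```python
# Hypothetical observation/action interface for an Ikemen learning plugin.
# Nothing here exists in Ikemen today; it only illustrates what the bot
# would need to read (state) and write (inputs).
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # x, y, width, height

@dataclass
class Observation:
    own_health: float         # own life bar, normalized 0.0-1.0
    opp_health: float         # opponent's life bar
    own_meter: float          # own super/special bar
    opp_meter: float          # opponent's super/special bar
    round_timer: float        # seconds remaining in the round
    own_position: Tuple[float, float]
    opp_position: Tuple[float, float]
    own_hitboxes: List[Box]   # active attack boxes
    own_hurtboxes: List[Box]  # vulnerable boxes
    opp_hitboxes: List[Box]
    opp_hurtboxes: List[Box]
    stage_bounds: Tuple[float, float]  # left and right wall positions

# The full input set: four directions, the six Mugen buttons, and Start.
ACTIONS = ["up", "down", "left", "right", "A", "B", "C", "X", "Y", "Z", "start"]

def reward(obs: Observation, round_over: bool, round_won: bool) -> float:
    """Win the round, and win it efficiently: a won round pays out more
    the more time is left on the clock (assuming a 99-second round)."""
    if round_over:
        return (1.0 + obs.round_timer / 99.0) if round_won else -1.0
    return 0.0  # per-frame shaping (damage dealt/taken) could go here
```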
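Training could then follow the plan above: start with pure button-mashing against the existing scripted AI, and only gradually shift toward what the bot has learned. A minimal loop, assuming the sketch above plus a Gym-style env with reset()/step() and a policy object with best_action()/update(), all stand-ins for whatever learning algorithm is actually used:

```python
import random

def train(env, policy, episodes=100_000, epsilon_min=0.05):
    """Grind matches against Ikemen's scripted bots before facing humans.
    epsilon is the chance of pressing a random input; it starts at 1.0,
    so the first several thousand matches really are random flailing."""
    epsilon = 1.0
    for episode in range(episodes):
        obs = env.reset()                    # new match vs. a scripted bot
        done = False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)   # explore: random input
            else:
                action = policy.best_action(obs)  # exploit learned behavior
            next_obs, r, done = env.step(action)
            policy.update(obs, action, r, next_obs, done)  # learn from result
            obs = next_obs
        epsilon = max(epsilon_min, epsilon * 0.9999)  # slowly reduce randomness
```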
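The "train on strong machines, play back on weak ones" part is standard practice in machine learning: training is the expensive step, while running an already-trained policy is cheap. Something as simple as this would cover the export idea (the file name is made up):

```python
import pickle

def export_policy(policy, path="trained_bot.pkl"):
    """On the beefy training machine: freeze and save what was learned."""
    with open(path, "wb") as f:
        pickle.dump(policy, f)

def load_frozen_policy(path="trained_bot.pkl"):
    """On a low-spec machine: load the trained policy and only ever call
    best_action() on it. No learning happens at play time."""
    with open(path, "rb") as f:
        return pickle.load(f)
```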
This could be a groundbreaking idea. A lot of the old-school fighters I played as a kid in the '90s got to the point where I could defeat the computer because I could predict its scripting. You can learn how these bots react to stimuli and use that to goad them into bad positions. Some scripting took this into account and went deeper, giving the bots a way to handle some of this. But I realized that if I pushed forward to break its root, it usually didn't have any scripted responses for countering two or three moves in. I always thought, "Why doesn't it just try something new instead of doing the same predictable thing every single time I throw a fireball at it?"

Difficulty can be increased by giving the computer more health/"shield" and/or giving the player less. But that's not making the computer any better at playing the game; it's only making it more difficult by stacking the deck in its favor. The same is true for creating boss characters that have a special move for every occasion. They are better suited to counter any type of attack, which makes them more difficult, but they are not playing the game any better. Again, they're just getting the deck stacked more in their favor.

Another approach is adding randomness. Perhaps the character has several different sets of combos to link together, several "roots" that would need to be broken, and if one root fails to produce results in defeating its opponent or defending itself, it randomly switches to another. While this varies combat and gives the computer more options in battle, it's still not really learning to play the game better. It's playing better by randomly picking a different technique, not learning why it should pick one root over another. It's just lucking out that switching up throws the opponent off; once the opponent has learned all the techniques it has, the switching makes little difference.

A true deep learning machine should eventually start to build its own roots of combos and tricks to try against its opponents. It should also learn when to switch between roots based on elements such as how much time is left in the round, how much health it has, how much super bar it has, how much health its opponent has left, and how much super its opponent has. It might learn, for example, to build five different defensive roots and pick one of them when it's low on health. Or it might learn that when it has a health lead and the timer is low, it should keep using zoning moves like projectiles to run down the clock and reach the win condition.

If its opponent keeps throwing the same moves at it, it should eventually try new things until it develops a root system of commands that gets it over the hurdle. For instance, after getting hit in the face with a fireball for the 1,004th time, it might try jumping toward the opponent and kicking. Once it realizes this both spared it the health loss and lowered the opponent's health, it could bookmark that set of actions as a root that is good for handling fireballs. If that root is countered enough, it could develop a different one, maybe a neutral jump landing into a punch, and then randomize between the two to see which yields the statistically best results over time. Maybe it learns that the optimal mix is to jump toward the opponent and kick 25% of the time but do a standing jump into a punch 75% of the time. These are just thoughts and examples (I sketch the bookkeeping just below).

This would provide players with much more interesting fights in single player. I'm decent at fighting games. I'm no expert, but I'm decent.
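To ground that 25%/75% idea: the bookkeeping amounts to tracking, per situation, how often each root has worked, and sampling roots in proportion to their success. A trained network would internalize this implicitly, but a toy Python version (all names are mine) shows the mechanics; the situation key could just as well include coarse buckets for health, meter, and time remaining.

```python
import random
from collections import defaultdict

class RootSelector:
    """Tracks how often each root (a canned sequence of inputs) succeeds
    in a given situation, e.g. 'opponent threw a fireball', and samples
    roots in proportion to their observed success rate."""
    def __init__(self):
        # stats[situation][root] = [successes, attempts]
        # attempts starts at 1 to avoid dividing by zero
        self.stats = defaultdict(lambda: defaultdict(lambda: [0, 1]))

    def choose(self, situation, roots):
        weights = [
            self.stats[situation][r][0] / self.stats[situation][r][1] + 0.1
            for r in roots  # +0.1 floor so untried roots still get sampled
        ]
        return random.choices(roots, weights=weights)[0]

    def record(self, situation, root, success):
        s = self.stats[situation][root]
        s[0] += 1 if success else 0
        s[1] += 1

# Example: after enough fireballs to the face, the selector converges on
# whichever mix works best, e.g. jump-forward-kick roughly 25% of the time
# and neutral-jump punch roughly 75% of the time.
selector = RootSelector()
answers = ["jump_forward_kick", "neutral_jump_punch"]
selector.record("opponent_fireball", "neutral_jump_punch", success=True)
action = selector.choose("opponent_fireball", answers)
```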
What I have noticed over the years is that most fighting games don't have very smart bots. Rather, on higher difficulties they just ramp up how many multi-hit combos the bots are allowed to throw at you again and again. This would be an opportunity to develop a fighting game intelligence that could grow organically and actually learn to play better, rather than simply be given a deck that's stacked higher in its favor.

Concerns: the person who wrote the Super Mario World deep learning bot noted that this type of deep learning performed poorly when he tested it on an old fighting game. Fighting games may be too complex for this type of deep learning to pick up. My thinking is that if it were allowed to train long enough, it would eventually pick up the game.

Here are some links to show you what deep learning looks like, with very good explanations of how it works. The first is the best for understanding what I'm talking about.

The other option is to have the computer learn from other people playing and try to mimic their play styles (sketched at the end of this post).

Here is what ViZDoom looks like.
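That last option is known as imitation learning, or behavioral cloning: record (game state, button pressed) pairs from human matches and fit a model that predicts the human's input. A minimal sketch, assuming states have already been flattened into numeric feature vectors and buttons into integer indices (the recording format is my assumption):

```python
import numpy as np

def behavioral_cloning(states, actions, n_actions, epochs=50, lr=0.1):
    """Fit a linear softmax policy that predicts the human's button press.
    states: (N, D) array of feature vectors (health bars, positions,
    timer, ...); actions: (N,) array of button indices from human play."""
    n, d = states.shape
    w = np.zeros((d, n_actions))
    for _ in range(epochs):
        logits = states @ w
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(n), actions] -= 1.0          # softmax cross-entropy gradient
        w -= lr * (states.T @ probs) / n
    return w

def mimic(w, state):
    """Play like the recorded humans: pick the most likely input."""
    return int(np.argmax(state @ w))
```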