
In my last post I described the high-level architecture of my software: the soccer match part is divided into a client that displays the graphics and receives input, a server that holds all the match state, and an AI process that acts as a client but controls several players at once. I also said my next task would be to write a small working piece of code for each part to get me started. Well, I’m still there; the main physics part is done. As an exercise in working with the BSD sockets interface in a language other than C, I wrote the physics part in Haskell, which was fun. Fun, but not easy.

Basically, the difficult part was that I wanted the server to establish connections with many clients, send each of them some information about the match upon connection, and then broadcast the match data to all clients simultaneously. For this, of course, you need concurrency and threads, though the fact that Haskell, as a purely functional language, handles concurrency quite differently confused me a bit. The answer is Control.Concurrent.forkIO and, for the more complicated stuff (i.e. communication between threads), Control.Concurrent.STM. For tutorials on programming servers with Haskell, a nice, light introduction can be found at http://metavar.blogspot.com/2007/03/simple-socket-programming.html, while a somewhat more difficult but just as interesting article is at http://sequence.complete.org/node/258. For a client, check out http://www.haskell.org/haskellwiki/Roll_your_own_IRC_bot. But I digress. Actually I wanted to write about AI.
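
To give an idea of what the forkIO/STM combination looks like, here is a minimal sketch of the broadcast pattern with the actual socket handling stripped out (the message contents and the client count are made up for illustration): every client thread duplicates a TChan, so a single write by the server loop reaches all of them.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forM_)
import Data.List (sort)

broadcastDemo :: IO [String]
broadcastDemo = do
  chan <- newTChanIO               -- the broadcast channel
  done <- newTVarIO (0 :: Int)     -- how many clients have received the message
  out  <- newTVarIO []             -- what each client saw
  forM_ [1 :: Int .. 3] $ \i -> do
    myChan <- atomically (dupTChan chan)   -- per-client copy of the feed
    _ <- forkIO $ do
      msg <- atomically (readTChan myChan)
      atomically $ do
        modifyTVar' out ((show i ++ ":" ++ msg) :)
        modifyTVar' done (+ 1)
    return ()
  atomically (writeTChan chan "matchdata") -- one write from the server loop
  atomically $ do                          -- block until all three clients saw it
    n <- readTVar done
    check (n == 3)
    readTVar out

main :: IO ()
main = broadcastDemo >>= print . sort
-- prints ["1:matchdata","2:matchdata","3:matchdata"]
```

In the real server each client thread would also own a socket handle and forward everything it reads from its channel copy to the wire.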

So, after finishing my physics server (actually it’s not completely finished, as it has no soccer rules yet, but it’s a server with some physics understanding after all), I went on to the AI part. Since it can get quite complicated, I thought I’d do some exercises on AI first. Now, how to start designing and programming AI was not really clear to me. I have the book “Programming Game AI by Example” by Mat Buckland (a very nice book, by the way), which has installed the relatively simple scheme of state machines in my head. I was not at all sure, however, whether a state machine would really be enough for soccer players, so I went on to do some more research.

I then stumbled upon neural networks, a topic I had only briefly read about in the past – they seemed interesting, but I never had the patience to sit down and try to grasp the theory. I finally went through the basics on a few sites, including Wikipedia, but noticed that none of them really showed any practical examples. Eventually I came upon two web sites that explained in detail how to use neural networks. The first one is in German and describes the use of neural networks in a Counter-Strike bot, see http://johannes.lampel.net/bll137.html. It was a very interesting read, but to grasp the details you’d need to dive into the source code. The second site can be found at http://www.ai-junkie.com/ann/evolved/nnt1.html; it is a very practical introduction to neural networks, it helped me a lot, and it’s also by Mr. Buckland, who keeps surprising me with how simple he can make difficult things seem. His tutorial trains the networks with genetic algorithms, for which he also wrote a great tutorial (http://www.ai-junkie.com/ga/intro/gat1.html). The code, like all code from Mat, seems to be written for a less standards-compliant compiler such as Visual Studio 6.0, which means you have to tweak it a bit if you want it to compile on a compiler that follows the C++ standard, but it’s doable.

Encouraged by the theory and these practical tutorials, I went on to translate Mat’s minesweepers into Haskell. During this process I also realized why almost all of the tutorials and the theory of artificial neural networks show so few practical examples: my neural network code itself is only about 50 lines of active code and consists mostly of the mathematical formulas behind the networks. The application of the neural networks, however, is nearing 200 lines of code (huge, isn’t it?) and basically describes the whole learning and usage process of the brain, err, neural network.
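
To show just how small that core is, here is roughly what those formulas boil down to: a sketch of a plain feed-forward network in Haskell. The layer sizes and weights below are invented for illustration; they are not taken from the Mines code.

```haskell
import Data.List (foldl')

-- A neuron is its input weights plus a bias; a layer is a list of
-- neurons; a network is a list of layers.
type Neuron  = ([Double], Double)
type Layer   = [Neuron]
type Network = [Layer]

sigmoid :: Double -> Double
sigmoid x = 1 / (1 + exp (negate x))

-- One neuron: weighted sum of the inputs, plus bias, squashed.
neuronOut :: [Double] -> Neuron -> Double
neuronOut inputs (ws, bias) =
  sigmoid (sum (zipWith (*) ws inputs) + bias)

-- Feeding forward is just folding the input vector through the layers.
feedForward :: Network -> [Double] -> [Double]
feedForward net inputs = foldl' step inputs net
  where step acts layer = map (neuronOut acts) layer

main :: IO ()
main = do
  -- tiny 2-2-1 network with hand-picked weights, purely illustrative
  let hidden  = [([1.0, -1.0], 0.0), ([-1.0, 1.0], 0.0)]
      outLayer = [([2.0, 2.0], -1.0)]
  print (feedForward [hidden, outLayer] [0.5, 0.25])
```

That really is the whole black box; everything else is deciding what the inputs and outputs mean and how the weights get trained.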

What I learned is that how a neural network works is basically just a few formulas that don’t really say much about the application; in the end the network is a black box whose internals you don’t really know, but which just “magically” gives you the correct values. The difficult part when applying neural networks to a problem is designing the way they’re used and figuring out how to make them learn as well and as fast as possible. (I haven’t looked into the other ways of getting networks to learn, i.e. of providing feedback to them, such as simulated annealing and whatnot; for now genetic algorithms seem good enough.) I also realized how much the whole process is based on randomness, and while I thought that was quite funky, I did have my moments of despair trying to figure out whether the thing had actually learnt something or not. (If in doubt, wait.)
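
To make the genetic-algorithm side concrete, here is a sketch of the two core operators working on flat weight-list genomes, in the spirit of Mat’s tutorial. The crossover point is random, mutation nudges each weight with some probability, and the rates, seed and genome values below are all invented:

```haskell
import System.Random

-- A genome is just the network's weights flattened into one list.
type Genome = [Double]

-- Single-point crossover: splice two parents at a random cut.
crossover :: RandomGen g => g -> Genome -> Genome -> (Genome, g)
crossover g mum dad =
  let (cut, g') = randomR (0, length mum) g
  in (take cut mum ++ drop cut dad, g')

-- Mutation: with probability `rate`, perturb a weight by up to `maxPerturb`.
mutate :: RandomGen g => Double -> Double -> g -> Genome -> (Genome, g)
mutate rate maxPerturb = go
  where
    go g []       = ([], g)
    go g (w : ws) =
      let (p, g1)    = randomR (0, 1) g
          (d, g2)    = randomR (negate maxPerturb, maxPerturb) g1
          (rest, g3) = go g2 ws
          w'         = if p < rate then w + d else w
      in (w' : rest, g3)

main :: IO ()
main = do
  let g            = mkStdGen 42           -- fixed seed, runs are repeatable
      mum          = [0.1, 0.2, 0.3, 0.4]
      dad          = [0.9, 0.8, 0.7, 0.6]
      (child, g')  = crossover g mum dad
      (mutant, _)  = mutate 0.1 0.3 g' child
  print child
  print mutant
```

Selection (picking fitter genomes as parents more often) sits on top of these two, and that is where the “wait and see if it learnt anything” part comes in.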

Well, in the end it worked, though the minesweepers aren’t rolling around the screen like in the Windows/C++ counterpart; instead the path of a minesweeper is drawn into an SVG file, as in http://finder.homelinux.org/haskell/Mines-0.1/svg_sample/gen400.svg. In this picture you can actually see a minesweeper (the line) going for the mines nearby (the dots). Oh, want to take a look at the code? It can be found at http://finder.homelinux.org/haskell/Mines-0.1/.
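
For the curious, dumping such a path by hand is easy, since SVG is just XML; here is a sketch (the canvas size and points are made up, and the real Mines code may well do this differently):

```haskell
-- Render a path through a list of positions as an SVG polyline.
pathToSvg :: [(Double, Double)] -> String
pathToSvg pts = unlines
  [ "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"400\" height=\"400\">"
  , "<polyline fill=\"none\" stroke=\"black\" points=\"" ++ coords ++ "\"/>"
  , "</svg>"
  ]
  where
    coords = unwords [ show x ++ "," ++ show y | (x, y) <- pts ]

main :: IO ()
main = writeFile "path.svg" (pathToSvg [(0, 0), (50, 80), (120, 60), (200, 200)])
```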

With that cleared up, the next thing to do is to finally implement a simple soccer AI. And, well, if I did it the way I originally planned – with the locations of all the players and the ball plus the player skills and personality as input, and an action (be it running, kicking the ball, etc.) including its direction and power as output – it would probably end up being a pretty big neural network. The time to train it would probably also be pretty long, I guess.

The solution I’m trying to come up with combines state machines, neural networks, decision trees and maybe an occasional Markov decision process. (I also need to finally get my hands on the book “Artificial Intelligence: A Modern Approach”, which I’ve managed to miss… until now.) The basic principle is the same one discussed in my last post: breaking everything into smaller parts, with state machines or decision trees assisting with the splitting, and neural networks handling the smaller, more limited problems. My next post will probably still be on AI, but when that’s done (or at least the skeleton of it) and the AI players are kicking the ball, the only bigger thing missing is a client that actually shows us the action…
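
To sketch what I mean by the combination: the coarse decision (“what am I doing right now”) can live in a plain state machine, and the neural networks would only answer the small questions inside a state, like which direction to run or how hard to kick. The states, perception fields and thresholds below are all invented, not the actual design:

```haskell
-- Coarse behaviour states for one player.
data PlayerState
  = ChaseBall
  | SupportAttacker
  | ReturnHome
  | KickBall
  deriving (Show, Eq)

-- What a player can see of the match (hypothetical fields).
data Perception = Perception
  { distToBall  :: Double
  , teamHasBall :: Bool
  , iAmClosest  :: Bool
  }

-- One transition step of the state machine.
step :: PlayerState -> Perception -> PlayerState
step KickBall p
  | distToBall p >= 1.0 = ChaseBall        -- just kicked the ball away
step _ p
  | distToBall p < 1.0  = KickBall
  | iAmClosest p        = ChaseBall
  | teamHasBall p       = SupportAttacker
  | otherwise           = ReturnHome

main :: IO ()
main = do
  print (step ReturnHome (Perception 0.5 True True))   -- prints KickBall
  print (step KickBall   (Perception 20.0 True False)) -- prints ChaseBall
```

Inside KickBall, say, a small network with a handful of inputs would pick the kick direction and power, which keeps each network tiny and quick to train.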



