Tuesday, October 25, 2011

Collaborative Learning

Collaborative Mind Map

I rarely study for a test.  My excuse is that it's not really studying, but cramming.  After all, shouldn't one learn the material while it is being taught (at their own pace), rather than the night before a score is given?  The truth, however, is that not everyone has the time and interest to always be attentive during the learning process, and thus periodic reviews/tests are needed to prompt us to "study"!  So here we go...

The Experiment: Collaborative Review

Since this is one of those rare times that I actually study for an exam, I thought it would be interesting to try out a new approach.  What if, instead of meeting in a group to discuss possible exam problems, we do so online?  Each person does a self-review of the material, then posts five possible questions and answers to their blog and shares them.

What are the benefits?  Well, for one, we gain more perspectives on the material.  Instead of reviewing from our own point of view alone, we get to see others' points of view as well (i.e., what they think is important and likely to show up on the exam).  See the mind map above for what we have merged together so far!

Here are my review questions and answers:
1. Why should you indent your code with two spaces instead of using the tab character?
A: Two spaces keep the indentation structure readable without taking up too much horizontal space, and spaces (unlike tab characters) render identically in every editor.

2. Give an example of a CheckStyle error
A: Although it is difficult to differentiate between PMD and CheckStyle errors, one can generalize CheckStyle errors to those relating to the style/format of the source code.  For example, CheckStyle would recognize the following as errors:
a) Use of tab characters instead of spaces for indentation
b) Not ending the first sentence in a documentation block with a period
c) Line is longer than 100 characters
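As a made-up illustration, the following class would trip all three of those CheckStyle rules; the violations are marked in comments:

```java
/**
 * A made-up class for illustrating common CheckStyle violations
 */ // CheckStyle: first Javadoc sentence does not end with a period
public class Greeter {
    public String greet(String firstName, String middleName, String lastName, String title, String suffix) { // CheckStyle: line longer than 100 characters
	return "Hello, " + title + " " + firstName; // CheckStyle: tab character used for indentation
    }
}
```

Note that the code compiles and runs fine; CheckStyle only complains about how it looks.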

3. Give an example of a PMD error
A: Although it is difficult to differentiate between PMD and CheckStyle errors, PMD errors are usually more focused on source code semantics.  For example, PMD would recognize the following as errors:
a) An if() statement is always true or always false
b) A method is empty
c) An implementation type is used instead of the interface type (e.g., ArrayList instead of List)
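To make those concrete, here is a small made-up class that would trigger each of the three warnings (marked in comments):

```java
import java.util.ArrayList;

// Made-up class illustrating typical PMD findings.
public class PmdDemo {
    // PMD: implementation type used where the interface type List would do.
    private ArrayList<String> names = new ArrayList<String>();

    public int count() {
        if (true) {           // PMD: condition is always true
            return names.size();
        }
        return -1;            // never reached
    }

    public void doNothing() { // PMD: empty method body
    }
}
```

Again, all of this compiles; PMD is pointing at semantic smells rather than format.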

4. Give an example of a FindBugs error
A: FindBugs checks the compiled bytecode for possible performance and correctness issues.  Possible problems/errors that it reports on are:
a) Declared class data fields that are never used in the class
b) Using Math.round() to round an integer-casted floating point value
c) Using incompatible bit masks in a comparison that always yields the same result
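The Math.round() case is easy to demonstrate in plain Java; FindBugs flags it because the cast truncates the value before the rounding ever happens:

```java
public class RoundDemo {
    // FindBugs flags this: the cast to int truncates 2.7f down to 2 *before*
    // Math.round runs, so the rounding is a no-op.
    public static long brokenRound(float value) {
        return Math.round((int) value);
    }

    // Correct version: round the float itself.
    public static long correctRound(float value) {
        return Math.round(value);
    }
}
```

For an input of 2.7f, the broken version yields 2 while the correct version yields 3.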

5. Give an example when an IDE could be "bad" for you
A: An IDE can rarely be "bad" for you, but the lack of one, or too much dependence on one, could cause you to mess up a job interview.  For example, the FizzBuzz program is often used to evaluate an interviewee's basic programming ability.  The question is: can you write a fully compilable version of the code without the help of an IDE?  Or will you need to go through multiple revisions to iron out the syntax bugs that the IDE usually flags and corrects for you?
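For reference, here is one way FizzBuzz might be written by hand; the exact spec varies by interviewer, but the classic version prints 1 through 100 with multiples of 3, 5, and 15 replaced:

```java
public class FizzBuzz {
    // Returns the FizzBuzz output for a single number.
    public static String say(int n) {
        if (n % 15 == 0) {
            return "FizzBuzz";
        }
        if (n % 3 == 0) {
            return "Fizz";
        }
        if (n % 5 == 0) {
            return "Buzz";
        }
        return Integer.toString(n);
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            System.out.println(say(i));
        }
    }
}
```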

Thursday, October 20, 2011

ToyBot is hosted on Google Code

After the competition, we started posting our individual robots to Google Code for continued development and improvement.  If you would like to contribute, feel free to visit ToyBot's project page.  I have also posted my pre-packaged robot there, so feel free to download it and run it against yours.

If you are a developer and would like to contribute, go straight to the "Source" tab and grab a copy of the distribution using subversion.

I am sure there are many areas in which ToyBot can be improved.  Hop on in and join the fun challenge!

Tuesday, October 11, 2011

Robot Rumble!!!



It is time!  The robocode competition is here, and we are all ready to have fun!  After spending hours going through the API source code, and after a few "Java cracking" attempts that only proved the API to be pretty secure, I settled on using the AdvancedRobot API.  Following the robocode lessons, I created an enemy robot tracker, which lets me track other robots' movements and make predictions based on their existing movement patterns.  The radar tracking code keeps all tracked robots in an array and locks onto the first robot it sees.  The lock uses the minimum radar turn, assuming the robot body and gun are turned in the opposite direction.  Finally, the motion is a simple circular movement, with a random reversal of direction whenever a continuous decrease in distance to the target is observed.  A "gutter" region around the battlefield is also defined to keep the robot away from the walls, minimizing self-inflicted damage.
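The "minimum radar turn" part boils down to normalizing the difference between the radar heading and the target bearing into the [-180, 180) degree range, so the radar always takes the shorter rotation.  Here is a standalone sketch of that helper (the names are mine, not part of the robocode API):

```java
public class RadarMath {
    // Normalizes an angle in degrees into the range [-180, 180), so that
    // turning by the result is always the shortest rotation.
    public static double normalizeBearing(double angle) {
        while (angle >= 180) {
            angle -= 360;
        }
        while (angle < -180) {
            angle += 360;
        }
        return angle;
    }
}
```

For example, instead of turning 270 degrees clockwise, the normalized result is -90, i.e. a quarter turn counter-clockwise.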

The competition is happening now!  In the meantime, here are some performance statistics against the sample bots:

In one-on-one battle, this approach beat all the sample robots consistently.  One of the initial problems I had was with the "gutter" logic.  I initially set the robot to reverse every time it found itself in the gutter.  This was problematic because it ended up changing direction constantly and getting stuck in the gutter.  The fix was to timestamp each reversal and enforce a timeout between consecutive reversals, giving the robot enough time to exit the "gutter" condition.
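Stripped of the robocode-specific parts, the timestamp/timeout fix can be sketched as a small helper (the names and the tick-based cooldown are my own simplification):

```java
public class GutterGuard {
    private final long cooldownTicks; // minimum turns between reversals
    private long lastReverseTime;
    private int direction = 1;        // 1 = forward, -1 = reversed

    public GutterGuard(long cooldownTicks) {
        this.cooldownTicks = cooldownTicks;
        this.lastReverseTime = -cooldownTicks; // allow an immediate first reversal
    }

    // Called once per turn; reverses only if we are in the gutter AND the
    // cooldown since the last reversal has expired.
    public int update(boolean inGutter, long time) {
        if (inGutter && time - lastReverseTime >= cooldownTicks) {
            direction = -direction;
            lastReverseTime = time;
        }
        return direction;
    }
}
```

Without the cooldown check, every turn spent in the gutter would flip the direction again, which is exactly the constant-flipping behavior described above.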

Finally, like most of the students, I struggled the most with understanding the underlying robocode execution/event model.  The game physics page tries to explain the execution model, but it is still too unclear for in-depth exploitation.  After going through the API sources to trace the event model, here are my notes:


Function calls per turn:
performLoadCommands()
  - fireBullets() => add bullets to battle field
updateBullets() => update and remove
updateRobots()
  - performMove()
    - updateGunHeat()
    - updateGunHeading()
    - updateRadarHeading()
    - updateMovement()
    - checkWallCollision()
    - checkRobotCollision()
    - update scan flag if moved or turned
  - performScan()
handleDeadRobots()
  - compute scores
  - update survival on the remaining bots
computeActiveRobots() => count how many are alive
publishStatuses()
  - energy, x, y, bodyheading, gunheading, radarheading, velocity, remaining moves/turns, gunheat, roundNum, time, etc.
wakeUpRobots()
  - waitWakeUp()
  - waitSleeping()


One needs to look at the event model in a cyclic manner.  The last step, wakeUpRobots(), is when your custom code is executed/evaluated.  I believe that during your code's execution, whenever you call a function that corresponds to an action (e.g. execute(), ahead(), turnRight(), turnRadarRight(), etc.), the action is registered and the robot code is put to sleep.  The event model then starts evaluating from the top, first updating bullet firing.  Note that for each turn, the standard Robot API only allows registering one of the various actions in the event model.  The AdvancedRobot API, on the other hand, has set*() functions to register multiple actions per turn before the call to execute() interrupts your code and puts it to sleep.  I hope these notes help the next generation of robocode hackers!  Enjoy and have fun!
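To make the Robot-vs-AdvancedRobot difference concrete, here is a toy model (entirely my own, not the robocode API) of the action-registration idea described above:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of action registration as I understand it: set*-style calls
// queue actions, and execute() hands the whole batch to the engine (which
// would then put the robot to sleep for the turn).
public class TurnModel {
    private final List<String> pending = new ArrayList<String>();
    private final List<List<String>> turns = new ArrayList<List<String>>();

    // AdvancedRobot-style: register an action without ending the turn.
    public void setAction(String action) {
        pending.add(action);
    }

    // execute(): flush every registered action as one turn.
    public void execute() {
        turns.add(new ArrayList<String>(pending));
        pending.clear();
    }

    // Robot-style blocking call: one action, and the batch ends right away
    // (in the real engine it would also block until the action completes).
    public void blockingAction(String action) {
        setAction(action);
        execute();
    }

    public List<List<String>> getTurns() {
        return turns;
    }
}
```

In this model, two setAction() calls followed by execute() land in the same turn, while each blockingAction() call always occupies a turn of its own.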