Wednesday, December 14, 2011

Hale Aloha CLI version 2.0

Team project page: http://code.google.com/p/hale-aloha-cli-tiger/

Not long after our review of the "tiger" team's code base, we were invited to join the team and add some new enhancements:

set-baseline [tower | lounge] [date]
This command defines [date] as the "baseline" day for [tower | lounge].  [date] is an optional argument in YYYY-MM-DD format and defaults to yesterday.  When this command is executed, the system should obtain and save the amount of energy used during each of the 24 hours of that day for the given tower or lounge.  These 24 values define the baseline power for that tower or lounge for that one hour time interval.  For example, if lounge Ilima-A used 100 kWh of energy during the hour 6am-7am, then the baseline power during the interval 6am - 7am for Ilima-A is 100 kW.

monitor-power [tower | lounge] [interval]
This command prints out a timestamp and the current power for [tower | lounge] every [interval] seconds.  [interval] is an optional integer greater than 0 and defaults to 10 seconds. Entering any character (such as a carriage return) stops this monitoring process and returns the user to the command loop.

monitor-goal [tower | lounge] goal [interval]
This command prints out a timestamp, the current power being consumed by the [tower | lounge], and whether or not the lounge is meeting its power conservation goal.   [goal] is an integer between 1 and 99.  It defines the percentage reduction from the baseline for this [tower | lounge] at this point in time.  [interval] is an integer greater than 0, and defaults to 10 seconds.
For example, assume the user has previously defined the baseline power for  Ilima-A as 100 kW for the time interval between 6am and 7am, and the current time is 6:30am.   If the goal is set as 5, then Ilima-A's current power must be 5% less than its baseline in order to make the goal.  At the current time, that means that Ilima-A should be using less than 95 kW of power in order to make its goal.
It is an error if the monitor-goal command is invoked without a prior set-baseline command for that [tower | lounge].  Entering any character (such as a carriage return) stops this monitoring process and returns the user to the command loop.

From these specifications, we identified two technical areas which needed to be implemented to support these commands:
1. Persistent storage and retrieval of baseline data for set-baseline and monitor-goal
2. Timer-based monitor loop for both monitor-power and monitor-goal
I decided to tackle #1, while Ted decided to tackle #2.  Ted implemented the monitor loop with a Timer.  Our design for persistent storage across command executions is to store the baseline data in XML files, each named in the pattern "[tower | lounge].xml".  To keep the design modular, I created a separate class that holds the baseline data and handles the XML file storage, parsing, and retrieval.  See Baseline.java for the implementation details.  The XML file handling code was based on this tutorial: http://totheriver.com/learn/xml/xmltutorial.html
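
To give a feel for the approach, here is a minimal, hypothetical sketch (not the actual Baseline.java) that saves and reloads 24 hourly values using the standard DOM and Transformer APIs covered in that tutorial:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class BaselineSketch {
  private final double[] hourlyEnergy = new double[24]; // kWh used in each hour of the baseline day

  /** Saves the 24 baseline values to "<source>.xml", e.g. "Ilima-A.xml". */
  public void save(String source) throws Exception {
    Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
    Element root = doc.createElement("baseline");
    root.setAttribute("source", source);
    doc.appendChild(root);
    for (int hour = 0; hour < 24; hour++) {
      Element e = doc.createElement("hour");
      e.setAttribute("index", String.valueOf(hour));
      e.setTextContent(String.valueOf(hourlyEnergy[hour]));
      root.appendChild(e);
    }
    TransformerFactory.newInstance().newTransformer()
        .transform(new DOMSource(doc), new StreamResult(new File(source + ".xml")));
  }

  /** Loads the 24 baseline values back from "<source>.xml". */
  public void load(String source) throws Exception {
    Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
        .parse(new File(source + ".xml"));
    NodeList hours = doc.getElementsByTagName("hour");
    for (int i = 0; i < hours.getLength(); i++) {
      Element e = (Element) hours.item(i);
      hourlyEnergy[Integer.parseInt(e.getAttribute("index"))] =
          Double.parseDouble(e.getTextContent());
    }
  }
}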

Another technical challenge we encountered was the need to simulate user interaction in automated JUnit tests.  We accomplished this by using connected pipes (PipedInputStream and PipedOutputStream) and System.setIn() to re-route the System.in stream.  The code under test is then executed on a separate thread by wrapping it in a FutureTask object, and user input is simulated with timed writes to the pipe.  See TestMonitorPower.java for a sample implementation.  This approach was suggested in this message thread: http://www.codeguru.com/forum/showthread.php?t=453798
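
Stripped of assertions and error handling, the skeleton looks roughly like this (hypothetical names and timings; TestMonitorPower.java has the real thing):

import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class PipedInputSketch {
  public static void main(String[] args) throws Exception {
    PipedOutputStream out = new PipedOutputStream();
    System.setIn(new PipedInputStream(out));   // whatever we write to 'out' shows up on System.in

    // Wrap the code under test in a FutureTask and run it on its own thread.
    FutureTask<Void> task = new FutureTask<Void>(new Callable<Void>() {
      public Void call() throws Exception {
        // e.g. invoke the monitor-power command here (hypothetical call)
        return null;
      }
    });
    new Thread(task).start();

    Thread.sleep(3000);   // let the monitor loop print a few readings
    out.write('\n');      // simulate the user pressing Enter to stop monitoring
    out.flush();

    task.get();           // wait for the command to return to the command loop
  }
}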

With these new testing methods, we were able to increase test coverage from the initial ~30% to 70%.  Better yet, the possibilities this opens up for automated testing are nearly endless!

Friday, December 2, 2011

Hale Aloha CLI Technical Review for the Tiger team



Today, we are conducting a peer technical review, taking on three different perspectives:
User's perspective: Does the system accomplish a useful task?
Installer's perspective: Can an external user successfully install and use the system?
Developer's perspective: Can an external developer successfully understand and enhance the system?

First, it should be understood that no software project is ever truly completed.  Rather, it is continuously being enhanced and enriched.  Thus, the issues pointed out here are relevant only up to revision 67 of the project.

User's Perspective:
$ java -jar build/jar/hale-aloha-cli-tiger.jar
Successfully connected to the Hale Aloha Wattdepot Server
> current-power Ilima-A
Current power for Ilima-A as of 2011-12-02 06:49:59 is 3.91 kilowatts.
> daily-energy Mokihana 2011-11-05
Date must be before today.
> daily-energy Mokihana 2011-12-01
2011-12-01T00:00:00.000-10:00
2011-12-01T23:59:59.999-10:00
Mokihana's energy consumption for 2011-12-01 was: 659 kWh.
> daily-energy Mokihana 2011-11-01
Exception in thread "main" org.wattdepot.client.BadXmlException: 400: Range extends beyond sensor data, startTime 2011-11-01T00:00:00.000-10:00, endTime 2011-11-01T23:59:59.999-10:00: Request: GET http://server.wattdepot.org:8190/wattdepot/sources/Mokihana/energy/?startTime=2011-11-01T00:00:00.000-10:00&endTime=2011-11-01T23:59:59.999-10:00
at org.wattdepot.client.WattDepotClient.getEnergy(WattDepotClient.java:762)
at org.wattdepot.client.WattDepotClient.getEnergyValue(WattDepotClient.java:810)
at org.wattdepot.client.WattDepotClient.getEnergyConsumed(WattDepotClient.java:857)
at edu.hawaii.halealohacli.command.DailyEnergy.run(DailyEnergy.java:85)
at edu.hawaii.halealohacli.processor.CommandProcessor.chooseModule(CommandProcessor.java:45)
at edu.hawaii.halealohacli.processor.CommandProcessor.run(CommandProcessor.java:71)
at edu.hawaii.halealohacli.Main.main(Main.java:38)
> energy-since Lehua-E 2011-11-01
org.wattdepot.client.BadXmlException: 400: Range extends beyond sensor data, startTime 2011-11-01T00:00:00.000-10:00, endTime 2011-12-02T06:57:14.561-10:00: Request: GET http://server.wattdepot.org:8190/wattdepot/sources/Lehua-E/energy/?startTime=2011-11-01T00:00:00-10:00&endTime=2011-12-02T06:57:14.561-10:00
at org.wattdepot.client.WattDepotClient.getEnergy(WattDepotClient.java:762)
at org.wattdepot.client.WattDepotClient.getEnergyValue(WattDepotClient.java:810)
at org.wattdepot.client.WattDepotClient.getEnergyConsumed(WattDepotClient.java:857)
at edu.hawaii.halealohacli.command.EnergySince.getEnergyConsumed(EnergySince.java:100)
at edu.hawaii.halealohacli.command.EnergySince.run(EnergySince.java:72)
at edu.hawaii.halealohacli.processor.CommandProcessor.chooseModule(CommandProcessor.java:49)
at edu.hawaii.halealohacli.processor.CommandProcessor.run(CommandProcessor.java:71)
at edu.hawaii.halealohacli.Main.main(Main.java:38)
Total energy consumption by Lehua-E from 2011-11-01 00:00:00 to 2011-12-02 06:57:14 is: 0.0 kWh.
> energy-since Lehua-E 2011-12-01
Total energy consumption by Lehua-E from 2011-12-01 00:00:00 to 2011-12-02 06:57:14 is: 158.4 kWh.
> rank-towers 2011-11-01 2011-11-09
Date must be before today.
> rank-towers 2011-11-01 2011-12-01
Exception in thread "main" org.wattdepot.client.BadXmlException: 400: Range extends beyond sensor data, startTime 2011-11-01T00:00:00.000-10:00, endTime 2011-12-01T23:59:59.999-10:00: Request: GET http://server.wattdepot.org:8190/wattdepot/sources/Ilima/energy/?startTime=2011-11-01T00:00:00.000-10:00&endTime=2011-12-01T23:59:59.999-10:00
at org.wattdepot.client.WattDepotClient.getEnergy(WattDepotClient.java:762)
at org.wattdepot.client.WattDepotClient.getEnergyValue(WattDepotClient.java:810)
at org.wattdepot.client.WattDepotClient.getEnergyConsumed(WattDepotClient.java:857)
at edu.hawaii.halealohacli.command.RankTowers.run(RankTowers.java:106)
at edu.hawaii.halealohacli.processor.CommandProcessor.chooseModule(CommandProcessor.java:53)
at edu.hawaii.halealohacli.processor.CommandProcessor.run(CommandProcessor.java:71)
at edu.hawaii.halealohacli.Main.main(Main.java:38)
> rank-towers 2011-11-23 2011-12-01
For the interval 2011-11-23 to 2011-12-01, energy consumption by tower was:
Mokihana 5289 kWh
Lehua 5290 kWh
Ilima 5426 kWh
Lokelani 6219 kWh
> quit
From the user's point of view (see the sample execution above), the program could use a bit of polishing on its command handling and error reporting.  For example, the command "current-power" works correctly, while the other three commands, "daily-energy", "energy-since", and "rank-towers", suffer from date handling problems.  Computationally, they work correctly.  However, the date handling code suffers from a "single-month view" problem when comparing dates (note how 2011-11-05 is rejected as not being before 2011-12-02), and it does not handle out-of-range dates gracefully, dumping raw stack traces instead of an error message.
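
A likely fix is to compare whole dates rather than individual date fields.  Here is a minimal sketch of such a check (my own illustration, not the team's code), using only standard library classes:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateCheckSketch {
  /** Returns true if the given YYYY-MM-DD string is strictly before today. */
  public static boolean isBeforeToday(String yyyyMmDd) throws ParseException {
    SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
    format.setLenient(false);               // reject nonsense like 2011-13-45
    Date requested = format.parse(yyyyMmDd);
    Date startOfToday = format.parse(format.format(new Date()));
    return requested.before(startOfToday);  // compares the full date, not just the day of month
  }
}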

Installer's Perspective:
From the installer's perspective, the project site has the basic description and user's guide needed to successfully install the system.  However, the user's guide could include more usage documentation rather than depending on the user to discover the "help" command while running the program.  The download page contains a copy of the project for distribution, with the correct version number and release date.  The distribution contains a ready-made "hale-aloha-cli-tiger.jar" file that can be used directly, without compiling or setting up a build system.  Overall, the system is well described and well packaged.  Again, based on the sample execution given above, the system could be improved in its date and error handling.

Developer's Perspective:
From the developer's perspective, the most useful starting point is the Developer's Guide.  The guide fully covers the command-line based approach to automated testing and builds, and provides a succinct set of guidelines for developing new enhancements.  It does not enforce any particular project management process, but it does cover JavaDoc and continuous integration with a Jenkins CI server.

Next, we take a look at the source code.  From the source code, the JavaDoc documentation can be generated, and from it the system's design can be examined.  The documentation is well written, and the system is designed to support information hiding.  However, we also notice that this particular design does not implement a CommandManager class, as suggested in the design specification.

With the Ant build system, it is easy to run the included tests and gain insight into their coverage.  It should be noted that two of the four commands are not tested by default: "energy-since" and "rank-towers".  This is because the developers named those JUnit test classes incorrectly, so the tests are never invoked by junit.build.xml.  This mistake led to a poor total coverage of only 21%.  Furthermore, the JUnit tests only exercise the "isValid()" method, while the main "run()" routine is never exercised.  The testing part of this project could be substantially improved.

The source code is properly documented and easy to understand.  However, it seems requirements capture was not rigorously exercised.  The modular design suggested in the specification is not followed, and the commands are glued together in an inefficient manner that requires re-instantiating every module for each user input.

Next, we examine the team participation.  Knowing that the team follows the "Issue-Driven Project Management" process, each member's contribution can easily be traced on the project's Issues page.  There are two incomplete issues: "Issue 36: Improve JUnit Tests" and "Issue 19: Provide error checking in Command Processor."  Thus, the developers are aware that their JUnit tests are incomplete.  The Issues page suggests an approximately equal distribution of contributions.

The Continuous Integration page for this project also tells part of the development story.  The project is worked on consistently, and problems are promptly corrected.  It appears there was a major issue with the build system from build #13 to #35, during which the team struggled for two days to find and correct the cause.  About 8 out of 10 commits were associated with an appropriate issue.

As a potential external developer, I can see many areas that could be improved upon.  The build system seems to have been a major stumbling block, so one should examine it and make it rock solid before starting the actual code development.  Further, because some design specifications were not followed (e.g. the modular design), new enhancements would need to change parts of the original code base to be incorporated.  With the inefficient instantiation, one might even have to replace the Main glue code completely.

It is understandable that requirements capture was not strictly stressed in the "Issue-Driven" development approach.  But it eventually surfaces as a major issue in the validation process.  Remember: design the system before you implement it!

Tuesday, November 29, 2011

WattDepot Command Line Tool

It has been a while since we started our discussion on being "Green" and the need to monitor and analyze our own energy usage.  We have been busy learning to use the WattDepot energy data collection system, and we have begun rolling out our own command line tool built on it: the Hale Aloha dorm energy data command line tool.


This is a pretty neat tool, allowing the user to access the energy data without having to learn to program!  Our goal was to make the tool modular and extensible.  We defined a Command interface that all current and future commands implement.  With this standard interface, new commands can be added without changing any other part of the system, such as the command Processor and Manager.  This is made possible by the on-the-fly, run-time discovery approach the command Manager uses to find new classes implementing the Command interface.  You can try it out: add a new command and see it appear as a new option in the program!
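
As a rough illustration, such an interface might look like the sketch below (hypothetical method names, not our exact source):

public interface Command {
  /** The keyword the user types, e.g. "current-power". */
  String getName();

  /** One-line usage text shown by the built-in "help" command. */
  String getHelp();

  /** Executes the command with the arguments the user supplied. */
  void run(String... args) throws Exception;
}

The Manager can then scan a known package at startup for classes implementing this interface and register each one under its keyword, which is how new commands appear without touching the Processor.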

We have learned quite a few new development techniques since our last post.  We learned to use the "Issues" tab on our Google Project site to drive our design effort; the term for this approach is "Issue Driven Project Management".  There were two other members in our TNT group: Ted and Josh.  We worked really well together as a team thanks to this approach!

Another cool addition we picked up along the way was Jenkins!  No, Jenkins was not our butler!  But he sure kept us honest.  Whenever one of us submitted code that was poorly tested, Jenkins would know and point it out right away!  You get a big red dot in your build history and a reminder in your mailbox that you screwed up!  No, Jenkins was not mean.  Rather, he gave each of us a chance at redemption before the rest of the team found out!  The best thing a team can expect from Jenkins is his weather report: sunny if the last 5 consecutive builds have no errors, cloudy and stormy otherwise.

I have had such a positive experience with these project development techniques that I am adapting them for my other projects.  However, since my other project cannot be open-sourced yet, I had to look for replacement tools.  I found Trac for issue tracking, and Bitten as a replacement for Jenkins' continuous integration.


Tuesday, November 1, 2011

The First Step toward being "Green"

The first step in finding a sensible solution to any problem is understanding.  This is applicable to all problems, scientific or otherwise.  Today, we'll examine the "Green" initiative that has gained so much momentum recently.  So, what is being "Green"?  To me, "Green" means being mindful of the impact we make as a species on the environment, and of the legacy we want to leave to future generations.  What is that legacy?  Simply put, it is the things we take for granted now but would be miserable without: clean water, air, energy, and sunlight.

It's humorous to think in these simple terms, rather than in terms of other things that currently have more green ($$$) value assigned to them.  However, let's face the simple truth: inflation and economic shifts will change the currency value of things to come, but they will never change our need for these basics: clean water, air, energy, and sunlight.

Today, the focus is on energy.  Being "Green" about energy means realizing that our supply of fossil fuel is NOT unlimited.  Indeed, the conflicts of the modern era can be tied to disputes over access to these limited supplies.  Yet our consumption of this depleting resource is not regulated; rather, it is increasing at an alarming rate.  Indeed, one measure of a developing nation's "progress" is its increase in fossil fuel consumption!  How, then, do we act sensibly as a species?

Our first step: study and understand our energy consumption behavior, and apply that understanding to reduce our consumption in a sensible manner.  To do that, we need monitoring and measurement solutions.  Being able to monitor our energy usage and understand what each piece of that consumption means to our lives will give us insight into our energy "efficiency".  That is, are we using energy to achieve the optimal good?  Or are we unknowingly wasting it in some areas just because it is still "cheap" to do so?  Is it really "cheap" to borrow from our grandchildren's "energy fund"?  Looking at the U.S. National Debt Clock, I am hard pressed to hope otherwise.

Hawaii's isolation makes us very dependent on imports.  It is interesting to note that we import approximately $6 billion worth of energy per year.  Just imagine how much we could achieve if we were to invest that amount of money toward developing our energy independence.  Short term savings?  Long term savings?  For our future, that investment would be a truer path to making our island home a paradise than any other.


Tuesday, October 25, 2011

Collaborative Learning

Collaborative Mind Map

I rarely study for a test.  My excuse is that it's not really studying, but cramming.  After all, shouldn't one learn the material while it is being taught (at their own pace), rather than the night before a score is given?  The truth, however, is that not everyone has the time and interest to always be attentive during the learning process, and thus periodic reviews/tests are needed to prompt us to "study"!  So here we go...

The Experiment: Collaborative Review

Since this is one of those rare times that I actually study for an exam, I thought it would be interesting to try out a new approach.  What if, instead of meeting as a group to discuss possible exam problems, we did so online?  Each person would do a self-review of the material, then post 5 possible questions and answers to their blog and share them:

What are the benefits?  Well, for one, we gain more perspectives on the material.  Instead of reviewing from our own point of view alone, we get to see others' subjective points of view as well (i.e. what they think is important and likely to be on the exam).  See the mind map above for what we have merged together so far!

Here are my review questions and answers:
1. Why should you indent your code with two spaces instead of using the tab character?
A: We indent code with two spaces to ensure readability without taking up too much space.

2. Give an example of a CheckStyle error
A: Although it is difficult to differentiate between PMD and CheckStyle errors, one can generalize CheckStyle errors to those relating to the style/format of the source code.  For example, CheckStyle would recognize the following as errors:
a) Use of tab characters instead of spaces for indentation
b) Not ending the first sentence in a documentation block with a period
c) Line is longer than 100 characters

3. Give an example of a PMD error
A: Although it is difficult to differentiate between PMD and CheckStyle errors, PMD errors usually focus more on source code semantics.  For example, PMD would recognize the following as errors (illustrated in the snippet after this list):
a) An if() statement is always true or always false
b) A method is empty
c) An implementation type is used instead of the interface type (i.e. ArrayList instead of List)
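
Here is a small, hypothetical snippet that would trigger these three kinds of warnings:

import java.util.ArrayList;

public class PmdExamples {
  public void alwaysTrue(int x) {
    if (true) {                      // (a) the condition is always true
      System.out.println(x);
    }
  }

  public void doNothing() {          // (b) an empty method
  }

  public ArrayList<String> names() { // (c) implementation type exposed instead of the List interface
    return new ArrayList<String>();
  }
}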

4. Give an example of a FindBugs error
A: FindBugs checks the compiled bytecode for possible performance and correctness issues.  Problems/errors that it reports include the following (see the snippet after this list):
a) Declared class data fields that are never used in the class
b) Using Math.round() to round an integer-casted floating point value
c) Using incompatible bit masks in a comparison that always yields the same result
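
And a hypothetical snippet containing those three bugs:

public class FindBugsExamples {
  private int neverUsed;              // (a) a field that is never read anywhere

  public long roundIt(double value) {
    return Math.round((int) value);   // (b) the int cast makes the rounding pointless
  }

  public boolean check(int flags) {
    return (flags & 0x10) == 0x01;    // (c) incompatible bit masks: this is always false
  }
}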

5. Give an example when an IDE could be "bad" for you
A: An IDE can rarely be "bad" for you, but the lack of an IDE, or too much dependence on one, could potentially cause you to mess up a job interview.  For example, the FizzBuzz program is often used to evaluate a potential interviewee's programming ability.  The question is, can you write a fully compilable version of the code without the help of an IDE?  Or will you need to go through multiple revisions to iron out syntax bugs that the IDE usually flags and corrects for you?

Thursday, October 20, 2011

ToyBot is hosted on Google Project

After the competition, we started posting our individual robots onto Google Project for continued development and improvement.  If you would like to contribute, feel free to visit ToyBot's Google Project page.  I also posted my pre-packaged robot there, so feel free to download it and run it against yours.

If you are a developer and would like to contribute, go straight to the "Source" tab and grab a copy of the distribution using subversion.

I am sure there are many areas in which ToyBot can be improved on.  Hop on in and join the fun challenge!

Tuesday, October 11, 2011

Robot Rumble!!!



It is time!  The Robocode competition is here, and we are all ready to have fun!  After spending hours going through the API source code, as well as trying some "Java cracking" attempts only to find the API to be pretty secure, I settled on using the AdvancedRobot API.  I followed the Robocode lessons and created an enemy robot tracker, which allows me to track other robots' movements and make predictions based on their existing movement patterns.  The radar tracking code keeps all the robots being tracked in an array and locks on to the first robot it sees.  The lock is achieved using the minimum radar turn, assuming the robot body and gun are turned in the opposite direction.  Finally, the motion is a simple circular movement, with a random reversal of direction added whenever a continuous decrease in distance to the target is observed.  Also, a "gutter" region around the battlefield is defined to try to keep the robot away from the walls, thus minimizing self-inflicted damage.
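
For readers new to Robocode, the radar lock part typically boils down to something like this sketch (a common community pattern, not necessarily ToyBot's exact code):

import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;
import robocode.util.Utils;

public class RadarLockSketch extends AdvancedRobot {
  public void run() {
    while (true) {
      if (getRadarTurnRemaining() == 0.0) {
        // No lock yet (or lock lost): sweep until something is scanned.
        setTurnRadarRightRadians(Double.POSITIVE_INFINITY);
      }
      execute();
    }
  }

  public void onScannedRobot(ScannedRobotEvent e) {
    // Absolute bearing to the target, then the smallest radar turn that keeps
    // it in view; the factor of 2 overshoots slightly so the lock is not lost.
    double absoluteBearing = getHeadingRadians() + e.getBearingRadians();
    double radarTurn = Utils.normalRelativeAngle(absoluteBearing - getRadarHeadingRadians());
    setTurnRadarRightRadians(2.0 * radarTurn);
  }
}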

The competition is happening now!  In the meantime, here are some performance statistics against the sample bots:

In one-on-one battles, this approach was able to beat all the sample robots consistently.  One of the initial problems I had was with the "gutter" logic.  I initially set the robot to reverse every time it found itself in the gutter.  This was problematic because it ended up changing direction constantly and getting stuck in the gutter.  The fix was to put a timestamp on each reversal and enforce a timeout between consecutive reversals, giving the robot enough time to exit the "gutter" condition.
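
In sketch form, the fix looks something like this (hypothetical constants; the real values were tuned by trial and error):

import robocode.AdvancedRobot;

public class GutterSketch extends AdvancedRobot {
  private static final double GUTTER = 100;        // distance from the walls, in pixels
  private static final long REVERSE_TIMEOUT = 20;  // minimum turns between reversals
  private long lastReverseTime = -REVERSE_TIMEOUT;
  private int direction = 1;

  private boolean inGutter() {
    return getX() < GUTTER || getY() < GUTTER
        || getX() > getBattleFieldWidth() - GUTTER
        || getY() > getBattleFieldHeight() - GUTTER;
  }

  private void maybeReverse() {
    // Only reverse if enough turns have passed since the last reversal;
    // otherwise the robot flip-flops and stays stuck in the gutter.
    if (inGutter() && getTime() - lastReverseTime >= REVERSE_TIMEOUT) {
      direction = -direction;
      lastReverseTime = getTime();
    }
    setAhead(100 * direction);
  }

  public void run() {
    while (true) {
      maybeReverse();
      execute();
    }
  }
}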

Finally, like most of the students, I struggled the most with understanding the underlying Robocode execution/event model.  The game physics page tries to explain the execution model, but it is still too vague for in-depth exploitation.  After going through the API sources and tracing the event model, here are my notes:


Function calls per turn:
performLoadCommands()
  - fireBullets() => add bullets to battle field
updateBullets() => update and remove
updateRobots()
  - performMove()
    - updateGunHeat()
    - updateGunHeading()
    - updateRadarHeading()
    - updateMovement()
    - checkWallCollision()
    - checkRobotCollision()
    - update scan flag if moved or turned
  - performScan()
handleDeadRobots()
  - compute scores
  - update survival on the remaining bots
computeActiveRobots() => count how many is alive
publishStatuses()
  - energy, x, y, bodyheading, gunheading, radarheading, velocity, remaining moves/turns, gunheat, roundNum, time, etc.
wakeUpRobots()
  - waitWakeUp()
  - waitSleeping()


One needs to look at the event model in a cyclic manner.  The last step, wakeUpRobots(), is when your custom code is executed/evaluated.  I think that during your code's execution, whenever you call a function that corresponds to an action (e.g. execute(), ahead(), turnRight(), turnRadarRight(), etc.), the action is registered and the robot code is put to sleep.  The event model then starts evaluating from the top, first updating bullet firing.  Note that, for each turn, the standard Robot API only allows registering one of the various actions in the event model.  The AdvancedRobot API, on the other hand, has set*() functions to register multiple actions per turn before being interrupted and put to sleep by the call to execute().  I hope these notes can help the next generation of Robocode hackers!  Enjoy and have fun!
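
In code, the difference reads roughly like this (my own illustration of the model, not an authoritative reference):

import robocode.AdvancedRobot;

public class EventModelSketch extends AdvancedRobot {
  public void run() {
    while (true) {
      // With the basic Robot API each of these would block, costing at least
      // one turn per call:
      //   ahead(100);  turnGunRight(30);  turnRadarRight(45);
      //
      // With AdvancedRobot, the set*() calls only register intents; they are
      // all committed together when execute() puts the robot to sleep for the turn.
      setAhead(100);
      setTurnGunRight(30);
      setTurnRadarRight(45);
      execute();
    }
  }
}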

Thursday, September 29, 2011

Ant Build System: An XML version of make

Once upon a time, I tried to practice Java by typing up my code in vim on the terminal and running it there.  The problem?  I couldn't.  Java has gotten a bit more complicated than I remembered.  Instead of a welcoming "Hello World", I was thrown an exception: java.lang.ClassNotFoundException.  That's just not friendly, especially for someone who's just trying to say hello!

Thus, I have been sticking with the Eclipse IDE ever since.  It feels a bit limiting, since I can't automate things on the terminal with command-line programs such as make.  However, it was necessary to build my basic Java knowledge in a simple IDE before tackling more advanced topics.  Luckily, we didn't have to wait long to learn about the Java version of make and other automation tools: the Ant build system.

Ant is invoked on the terminal using the command 'ant' and, by default, looks for an "XML makefile" named 'build.xml'.  To tell it to use another script file, invoke it in this form: 'ant -f <script-file>'.  Today, we'll look at some sample XML script files to familiarize ourselves with Ant:

1. A simple Hello World

<project name="HelloWorld" default="helloworld" basedir=".">
<target name="helloworld" description="print out Hello World">
<echo message="Hello World" />
</target>
</project>

The code above illustrates the minimal XML elements required for an Ant script.  First, the 'project' tag defines the project name and indicates the default target to execute.  In this case, the script has only one 'target', named "helloworld", which is also the default target.  Within the "helloworld" target, a single 'echo' task is embedded to print out "Hello World" when the script is invoked by Ant.

2. Immutable Properties

<project name="ImmutableProperties" default="printproperty" basedir=".">
<property name="my.property" value="1"/>
<property name="my.property" value="2"/>
<target name="printproperty" description="print out the value of the property my.property">
<echo>Value of my.property is: ${my.property}</echo>
</target>
</project>

The code above introduces a new tag called 'property', which defines something similar to a named constant.  The catch is that properties are immutable, so the second definition of the property named "my.property" is ignored and the script prints 1.  The 'echo' line shows how to access the value stored in a property: enclose its name in the form '${<property-name>}'.

3. Dependencies

<project name="Dependencies" default="foo" basedir=".">
<target name="foo" depends="bar">
<echo>foo</echo>
</target>
<target name="bar" depends="baz,elmo">
<echo>bar</echo>
</target>
<target name="baz" depends="qux">
<echo>baz</echo>
</target>
<target name="qux" depends="elmo">
<echo>qux</echo>
</target>
<target name="elmo">
<echo>elmo</echo>
</target>
</project>

The good old makefile had the ability to define dependencies and automate a whole hierarchy of dependent tasks.  The Ant version is similar, using the 'depends' attribute in the 'target' element definition.  The code above illustrates its use:
(a) by default, "foo" is the target, but because it depends on "bar", "bar" is executed first
(b) however, "bar" depends on two other targets, "baz" and "elmo", in that order
(c) "baz" yet again depends on another target named "qux"
(d) and "qux" depends on "elmo"
(e) finally, "elmo" doesn't depend on anything else, so it executes its content.  The dependencies are then fulfilled, and execution unwinds back up the chain.
The final output is: "elmo", "qux", "baz", "bar", "foo", in that order (each target runs only once per invocation, so "elmo" is not repeated).

4. Java Compilation using javac

<project name="HelloAnt" default="compile" basedir=".">
<property name="src.dir" location="src" />
<property name="build.dir" location="build" />
<target name="compile">
<mkdir dir="${build.dir}/classes" />
<javac srcdir="${src.dir}" destdir="${build.dir}/classes" includeAntRuntime="false" />
</target>
<target name="clean">
<delete dir="${build.dir}" />
</target>
</project>

Finally, we get to do the cool stuff: automating the build of your Java project.  The interesting thing to note is this: if you name your packages correctly and place the files in the correct package hierarchy within your "src" directory, all the code is compiled automatically by a single Ant task: 'javac'.  In the code above, we want to organize the compiled *.class files within the "build/classes" subdirectory, so we create that directory with the 'mkdir' task and then tell 'javac' to use it as the "destdir".  A single line to compile your whole source tree: now that's an improvement over make!

Another target normally found in a makefile is "clean".  Luckily, because we generate our build within the "build" subdirectory, the cleaning task is simple.  Whenever the "clean" target is invoked (i.e. with 'ant -f compile.helloant.build.xml clean'), the 'delete' task is executed on the "build" directory.

5. Java Execution using java

<project name="HelloAntExecute" default="run" basedir=".">
<import file="compile.helloant.build.xml"/>
<target name="run" depends="compile">
<java classname="edu.hawaii.ics613.helloant.HelloAnt" classpath="${build.dir}/classes" fork="true" />
</target>
</project>

So, we are back to the original question: how do you run a compiled Java program on the terminal without being thrown a java.lang.ClassNotFoundException?  It turns out that because of how Java classes are packaged, you can't point 'java' at the *.class file directly and execute it.  Instead, you need to invoke it with the fully qualified class name and provide the classpath to where the package was compiled.  For example, assuming the build script places its compiled classes in the "build/classes" subdirectory, you would execute the program manually by typing:
    java -classpath build/classes <full-packaged-class-name>
In the code example above, we have a class named "HelloAnt" within the package "edu.hawaii.ics613.helloant", so the fully qualified class name is "edu.hawaii.ics613.helloant.HelloAnt".  The 'java' element shows how the pieces normally required on the command line are provided to the 'java' task in the XML script.  You might have noticed that the property "build.dir" is not declared anywhere in this script; that is because it is imported from the previous script via the 'import' element.

6. Java Documentation using javadoc

<project name="HelloAntDoc" default="javadoc" basedir=".">
<import file="compile.helloant.build.xml"/>
<target name="javadoc" depends="compile">
<mkdir dir="${build.dir}/javadoc" />
<javadoc sourcepath="${src.dir}"
destdir="${build.dir}/javadoc"
author="true"
version="true"
use="true"
package="true"
overview="${src.dir}/overview.html"
windowtitle="HelloAnt API"
doctitle="HelloAnt API"
failonerror="${javadoc.failonerror}"
linksource="true" />
</target>
</project>

The normal 'javadoc' documentation is generated in a similar fashion.  For more details on the syntax used, see the full documentation of 'javadoc': http://ant.apache.org/manual/Tasks/javadoc.html

7. Zip it up

<project name="HelloAntDist" default="dist" basedir=".">
<import file="javadoc.helloant.build.xml"/>
<import file="run.helloant.build.xml"/>
<property name="dist.name" value="ant-code-katas-toyl" />
<property name="dist.dir" location="${build.dir}/dist" />
<target name="dist" depends="compile,run,javadoc,clean">
<zip destfile="${dist.dir}/${dist.name}.zip">
<zipfileset dir="${basedir}" excludes="*.jar, lib/**, javadoc/**, bin/**, **/.svn/*, **/*~, tmp/**, build/**" prefix="${dist.name}" />
</zip>
</target>
</project>

The last step in an automated build system is packaging up the code, after testing that it works, into a distributable archive.  In the code above, we used the 'zip' task.  The syntax is a bit more complicated because we want the archive to contain the project folder hierarchy.  To do so, you specify the "prefix" attribute of the 'zipfileset' element.  Packaged this way, the archive will recreate a single folder, named by the "prefix", containing the whole project tree.

In conclusion, 'ant' is the Java version of make.  It is a bit verbose due to its XML syntax, but that is a small price to pay for regaining the automation capability of a build system.

Tuesday, September 20, 2011

Gaming Challenge: Robocode!


Last Sunday, I spent 16 hours porting the memory scanner from Cheat Engine to a scripting language I use on Windows for gaming automation purposes.  Yup, 16 hours!  I guess something clicked in my dream, and I woke up at 1AM and kept cracking at it for 16 hours straight!  Tired?  Not really.  It was fun!

FUN is the keyword.  If it's fun, I don't mind doing it more often.  Heck, if it's fun, I WANT to do it ALL the time!  The difference between being mentally tired from an 8-hour day-job, and being mentally enlightened from a 16-hour "hack and crack" is your attitude... and "Fun" is definitely the right attitude to have!

So, how about having FUN in an ICS programming assignment?  Sure!  The screenshot above shows what we are doing in our ICS 613 class.  Nope, we are not designing a game from scratch.  Instead, we are competing in a robotics challenge called Robocode.

Robocode is a Java-based competition arena for anyone interested in putting their mind and wits to the test.  The challenge?  Program your own robots and pit their intelligence against other robots.  The catch?  You need some Java know-how, creativity, and most definitely time, to have fun!

The most challenging part of Robocode for me was the trigonometry.  Yes, you need MATH to play smart!  In fact, all the good game engines out there use physics and lots of math.  It was an interesting and refreshing experience to see programmers pull out their pencils and scribble away on a sheet of paper instead of typing up code.  I guess sometimes we forget: good software is designed and engineered.  Sometimes the engineering notes all reside in the programmer's head (for the easy problems).  But most of the time, programmers need scratch paper too.


Tuesday, August 30, 2011

Saying Hello in Java: "FizzBuzz"

My first programming assignment, after 5 years away from Java, is the FizzBuzz program.  The program is supposed to print the numbers 1 to 100, replacing those divisible by 3, 5, and 15 with the strings "Fizz", "Buzz", and "FizzBuzz" respectively.  This task took me about 8 minutes and 30 seconds to accomplish:

package edu.hawaii.ics613;

public class FizzBuzz {
  public static String generateOutput(int i) {
    if(i%15 == 0)
      return "FizzBuzz";
    else if(i%3 == 0)
      return "Fizz";
    else if(i%5 == 0)
      return "Buzz";
    else
      return String.valueOf(i);
  }
  public static void main(String[] args) {
    for(int i=1; i<=100; i++) {
      System.out.println(generateOutput(i));
    }
  }
}

Something new that I didn't learn about 5 years back is the JUnit testing facility.  Here is a simple test case for the FizzBuzz program above:

package edu.hawaii.ics613;

import static org.junit.Assert.*;
import org.junit.Test;
public class FizzBuzzTest {
  @Test
  public void testGenerateOutput() {
    assertEquals("Testing 1", "1", FizzBuzz.generateOutput(1));
    assertEquals("Testing 3", "Fizz", FizzBuzz.generateOutput(3));
    assertEquals("Testing 5", "Buzz", FizzBuzz.generateOutput(5));
    assertEquals("Testing 15", "FizzBuzz", FizzBuzz.generateOutput(15));
  }
}




Conclusion: I am still very rusty... need more Java polishing!

Monday, August 29, 2011

Proteus Cross Compiler and the LLVM Compiler Infrastructure

There is a new (6 days old) project on SourceForge that really piqued my interest: the Proteus Cross Compiler.  The project boasts the ability to generate Java code from GCC compatible languages such as C, C++, and Fortran.  Having had experience compiling C to MIPS assembly by hand, I began poring through the project home page and documentation.

At first, I was doubtful.  Many would-be language converters that I have looked at in the past failed at the insurmountable task.  What could make this project an exception?  I was pleasantly surprised to find that the task was reduced in complexity by taking advantage of another interesting, well-established project: the Low Level Virtual Machine (LLVM) compiler infrastructure.  The LLVM project takes care of the various GCC language front-ends, and spits out an optimized intermediate form which is then used by Proteus to generate Java code.  Thus, in a way, Proteus is just an LLVM to Java converter.

After being convinced of the reduced complexity of the task, I was quick to start setting up my environment to test out the system!  Looking through the worked-out examples, I figured it was best to try it on my Ubuntu virtual machine instead of directly on my OS X Lion, which would require me to manually compile many dependent packages.  On Ubuntu, installing the supporting LLVM and GCC front-end was really easy:

sudo apt-get install llvm-2.7 llvm-gcc-4.5

Finally, it is time to play!  I clicked on the "files" section to download the project, and the only thing there is a "readme.txt" file.  The 6-day old project does not have the files uploaded yet...  I guess we will just have to check it out next time!

One closing trivia: is anyone aware that Apple is using LLVM?  Check it out: http://developer.apple.com/technologies/tools/