Report

Whack-A-Robot
A Behavior Based Game

FrontPage.jpg

Jeppe Haugaard Sørensen (20081698)
Jakob Koed Holm (20081273)
Falk Olaf Altheide (20111270)

http://lego.wikidot.com/

1) Choosing the Project

Date: 02/02 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 3 hours

1.1) Goal

The goal of today is to choose and describe the end course project.

1.2) Plan

  • Find and describe three projects.
  • Select one of the three projects as the end course project.
  • Make a plan for the end course project.

1.3) Results

1.3.1) First Project Proposal - Animal Behavior/Robot Game

suggestions.jpg

Description:
A predator-prey setting with one predator robot and 5-8 prey robots. The predator is remotely controlled, and the preys drive around autonomously.
The predator is big and slow, and the preys are small and fast.
The preys search for food while avoiding each other, other objects and the predator. When the predator approaches they scream and scatter.
If one prey hears another prey screaming, it panics. Different sounds indicate different behaviors.
The predator is able to kill a prey robot by hitting it on the top.

Hardware/Software Requirements:
Each robot has to be built around an NXT, so around 7 NXTs are needed.
Each prey has two touch sensors, a microphone and a color sensor, plus two motors for driving.
The predator needs no sensors as it is remote controlled, but it needs two motors for driving and one for hitting the preys.
The predator is remotely controlled through Bluetooth by a PC or a smartphone.

Software Architecture:
The preys use a behavior-based architecture for controlling their actions, and all processing would be done on the NXTs.

Challenges:
Making the preys move autonomously and implementing a proper avoidance behavior could be a challenge.
It could be tricky to make the preys navigate using only the touch sensors. A solution to this could be to build some sort of inner map of the environment based on the tacho count.
Detecting sounds of different frequencies can also be a problem.

Presentation at the End:
A presentation would show someone playing the game, i.e. controlling the predator and hunting the preys until all the preys are dead.

1.3.2) Second Project Proposal - The Lost Robots

Collaborating, map building, communication.

Description:
Three robots with different skills collaborate in solving a task. The task could be finding differently colored objects in an environment and returning them to some spot in a certain sequence.
One robot has the task of mapping the environment and locating the objects, another robot grabs the objects and gives them to the third robot, which transports them to the goal zone.

Hardware/Software Requirements:
Three NXTs are needed, one for each robot, each equipped with motors for driving and an ultrasonic sensor for avoiding objects. A color sensor is needed to detect the objects, a touch sensor and a motor for grabbing, and finally a touch sensor on the transport robot.

Software Architecture:
All processing would be done in a behavior-based architecture on and between the NXTs. The NXTs communicate using Bluetooth.

Challenges:
The issues are communication between the robots and mapping an environment correctly.
The accuracy of the map could be improved if a controlled environment with some sort of grid is used.

Presentation at the End:
Showing the three robots solving a task by collaborating.

1.3.3) Third Project Proposal - The Driverless Car

Mapping, navigation, smart-phone.

Description:
A driverless car drives around a city (of LEGO roads), maps the environment, takes pictures of the whole city and matches the pictures to the related spots (a mini Google Street View car).

Hardware/Software Requirements:
A light sensor and an ultrasonic sensor for navigation, the tacho counter to help with mapping, and a smartphone for taking pictures.

Software Architecture:
The robot would be controlled with a behavior-based architecture and communicate with a PC via Bluetooth.

Challenges:
The biggest challenge here is mapping an environment correctly using the tacho counter.

Presentation at the End:
The robot drives around the track, making the map and sending pictures and the map to the pc.

1.4) Conclusion

We ended up choosing the first proposal because it sounds fun and has a lot of challenges. The project is easily extendable if we have a lot of time, e.g. we could make the predator move autonomously, or make the preys move slower if they have not eaten for a long time, or add similar, more advanced behavior.

1.4.1) The Overall Plan

  • Experiment with the sensors.
  • Design the architecture of the prey robots.
  • Start building and testing one prey robot.
  • Design and build the environment.
  • Make a prey robot move around in the environment.
  • Make all the prey robots move around and interact with each other in the environment.
  • Build the predator and implement remote control.
  • Make the preys interact with the predator.

2) Test of Sensors

Date: 17/05 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 4 hours

2.1) Goal

The goal is to test the different sensors expected to be used.

2.2) Plan

  • Test the infrared (IR) sensor.
  • Test the touch sensor.
  • Build the overall structure of the prey-robot.

2.3) Results

2.3.1) IR Sensor

To interact with the IR sensor the IRSeekerV2 class is used. The IR sensor has five sensors inside it, which enables it to detect the direction of the IR signal.
The IR sensor has two modes, AC and DC [4]. To reduce the interference from sunlight the AC mode is used. There are still some background measurements when no IR sender is present, and these readings range from zero to ten (approximately).

The following snippet shows a few ways to interact with the sensor.

// Initialize the IRSeekerV2 with the sensor port and the 
// AC mode.
IRSeekerV2 irSeeker = new IRSeekerV2(SensorPort.S1, Mode.AC);
 
// The getDirection returns the direction (1 to 9) or 0 if 
// there is no target.
int direction = irSeeker.getDirection();
 
// The getSensorValues returns the readings from all five 
// sensors in an array.
int[] sensorValues = irSeeker.getSensorValues();

2.3.2) Touch Sensor

The original idea was to use two touch sensors in the front of the prey robots to detect when the preys bump into each other. After some minor tests of how the angle of a push affects the touch sensor, the conclusion was that it would be better to use an ultrasonic sensor.

2.3.3) The Prey Robot

IMAG0319.jpg

The ideas behind the prey robot were the following:

  • A small and compact robot is better than a big and long one.
  • 'Animal' like robot.
  • Easy to detach the NXT.
  • Using the enter button as an upper directed touch sensor.
    • So the predator can smack the prey at the top to kill it.

It is a bit cramped when attaching the wires to the sensors, so shorter cables would be nice.
Perhaps the sensors need to be moved a bit further forward to make room for the cables.

2.4) Conclusion

Next time the ultrasonic sensor and the color sensor should be tested and implemented.
The prey should be designed and implementation of the prey behavior started.

3) Test of Sensors 2

Date: 23/05 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 7 hours

3.1) Goal

The goal for today is to test the ultrasonic sensor and the color sensor and to implement a prototype prey robot.

3.2) Plan

  • Test the ultrasonic sensor.
  • Test the color sensor.
  • Implement a prototype of the prey robot.

3.3) Results

3.3.1) Ultrasonic Sensor

To test the ultrasonic sensor, the measured distance was printed on the LCD:

UltrasonicSensor us = new UltrasonicSensor(SensorPort.S1);
 
LCD.drawString("Distance(cm) ", 0, 0);
 
while (!Button.ESCAPE.isPressed()) {
    LCD.drawInt(us.getDistance(), 3, 13, 0);
    try {
        Thread.sleep(300);
    } catch (InterruptedException e) {}
}

Although it is mounted lower on our prey robot than on the default LEGO car, the test ran fine and the sensor works as expected.

3.3.2) Color Sensor

The color sensor was not available, so this test has to be done at a later time.

3.3.3) Prey Behaviors

The behavior based architecture of the robot is inspired by [1].

The different behaviors of the robot are

  • Avoid edge (Color sensor)
  • Avoid obstacles/other robots (Ultrasonic sensor)
  • Run away from predator (IR sensor)
  • Eat food (color sensor)
  • Wander around

(Optionally, there could be a behavior "Listen for screams" using a microphone.)

Choosing which behavior to perform should be motivation based, i.e. a detected obstacle that is really close to the robot should lead to a higher motivation to avoid it than one that is far away.
The different behaviors are implemented by extending a superclass Behavior and are then called by an Arbitrator. Each behavior has a method getMotivation and a field with a unique priority. The robot performs the action of the behavior with the highest motivation. If two behaviors have equal motivation, the priority decides which action to perform. The arbitrator and the behaviors are daemon threads, so they run until the main thread terminates.
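
To make this concrete, here is a minimal sketch of what such a motivation-based Behavior superclass could look like. The names and details are illustrative assumptions, not the actual project code.

// Illustrative sketch of the Behavior superclass; names and details
// are assumptions, not the actual project code.
public abstract class Behavior extends Thread {
    // Unique priority used as a tie breaker between equal motivations.
    private final int priority;

    public Behavior(int priority) {
        this.priority = priority;
        setDaemon(true); // terminates together with the main thread
    }

    // How urgently this behavior wants to run right now (0 to 100).
    public abstract int getMotivation();

    // The movements/sounds performed when this behavior is chosen.
    public abstract void action();

    public int getBehaviorPriority() {
        return priority;
    }
}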

Here is a list of all the actions a prey robot can execute and the sensor readings triggering them:

Action                        Motivation   Sensor used    Sensor value
Wander around                 1            -              -
Avoid obstacle                25           Ultrasonic     <30
Avoid obstacle                50           Ultrasonic     <20
Avoid obstacle                100          Ultrasonic     <10
Back off (to avoid)           50           -              -
Move to one side (to avoid)
Run away                      25           IR             50
Run away                      50           IR             100
Run away                      100          IR             150
Avoid edge                    100          Light sensor   >38

The arbitrator looks for the behavior with the highest motivation (priority as tie breaker). If this differs from the current behavior, the old behavior is stopped and the new one is started. If it is the same behavior, it is only restarted if the motivation is higher than before and the behavior is self-interruptible.

3.4) Conclusion

The arbitrator and the prey behaviors were implemented, and the ultrasonic sensor was tested.
We had no color sensor, but we do expect to receive this for the next session, so this should be tested.
Next time we should figure out a way to interrupt a behavior even though it might be in the middle of an action, and the robot should be run on a test track.

4) Implementing the Prey Robot

Date: 24/05 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 6 hours

4.1) Goal

Implement the prey robot and test it on a simple track.

4.2) Plan

  • Use exceptions to interrupt the currently running behavior at any time.
  • Test the color sensor.
  • Run the robot on a simple test track.

4.3) Results

4.3.1) Prey Behavior Architecture

Exceptions are now used to interrupt behaviors. This makes it possible to stop a running behavior at any time without waiting for its methods to finish. For example, the RunAway or AvoidObstacle behavior should be able to interrupt other behaviors, or even itself, if the predator appears or the robot runs into something. By throwing an InterruptBehaviorException it is now possible to interrupt a behavior instantly.

The run method of the Behavior class.

public void run() {
    ...
    while (true) {
        if (isRunning()) {
            try {
                action();
            } catch (InterruptBehaviorException e) {}
        }
        ...
    }
}
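
What the snippet does not show is how the exception is triggered inside action(). One plausible approach, sketched below with assumed names (stopRequested, interruptibleDelay), is to let every long wait inside an action poll a stop flag set by the arbitrator:

// Sketch with assumed names; InterruptBehaviorException is unchecked
// so it can escape from anywhere inside action().
class InterruptBehaviorException extends RuntimeException {}

// Inside Behavior: set by the arbitrator on a behavior switch.
private volatile boolean stopRequested = false;

protected void interruptibleDelay(long ms) {
    long end = System.currentTimeMillis() + ms;
    while (System.currentTimeMillis() < end) {
        if (stopRequested) {
            throw new InterruptBehaviorException();
        }
        Thread.yield();
    }
}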

Additionally, a behavior can now dynamically change whether it is self-interruptible while it is performing its action. If the robot drives to the border of the track and AvoidBorder is self-interruptible, the robot appears to be standing still because it constantly interrupts itself. But once the robot has left the border and is still performing the action, it should be self-interruptible in case it comes to the border again (e.g. if it is in a corner). Therefore AvoidBorder is only self-interruptible once it is off the border.
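
In code this could look roughly as follows; the method names are placeholders for the actual maneuver:

// Rough sketch (placeholder method names): AvoidBorder suppresses
// self-interruption until the robot has left the border again.
public void action() {
    setSelfInterruptible(false); // still on the border: do not restart
    backAwayFromBorder();
    setSelfInterruptible(true);  // border seen again? restart at once
    turnAndDriveClear();
}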

4.3.2) Test on a Track

The robot was tested on a simple track of light tape on a dark floor. Using the light sensor, the robot is able to "see" the border and avoid leaving the track while wandering around. A problem is that the robot can cross the border while it is backing off to avoid something, so the robot has to "know" whether it is approaching the border forwards or backwards.

ColorID   Color
0         red
1         green
2         blue
3         yellow
4         magenta
5         orange
6         white
7         black
8         pink
9         gray
10        light gray
11        dark gray
12        cyan

4.3.3) Color Sensor

The color sensor can be accessed using the ColorSensor class [5]. The method getColorID gives the ID of the measured color (see the table above).

Using the color sensor, a behavior EatFood is implemented. There are colored spots on the track representing food. While the robot is wandering around its hunger increases, and when it happens to be on food, the hunger defines the motivation to stop and eat. While it is eating the hunger decreases, so at some point the robot moves away from the food again.
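
A minimal sketch of how the EatFood motivation could be computed, assuming a hunger counter and an illustrative FOOD_COLOR_ID constant:

// Sketch (assumed names): the motivation equals the current hunger
// while the robot stands on food, and is zero otherwise.
public int getMotivation() {
    if (colorSensor.getColorID() == FOOD_COLOR_ID) {
        return hunger; // grows while wandering, shrinks while eating
    }
    return 0;
}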

4.4) Conclusion

After this session all the behaviors of the prey worked. There needs to be some fine-tuning on the "real" track, but right now the robot is able to stay on the track, wander around, eat food and avoid obstacles, and if it sees the predator it is scared as hell and runs for its life.

We still need to build the other three preys as well as the predator.

5) Developing the Prey

Date: 04/06 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 8 hours

5.1) Goal

The goal of today is to make the prey more animal-like, both with regard to the head movement of the prey and in accordance with [1].

5.2) Plan

  • Finish the construction of the prey robot.
    • Figure out a solution for the back wheel.
    • Mount the cables for the sensors and motors.
  • Build the other three prey robots and see them interact.
  • Experiment with the head turning.
  • Implement an InternalState class for the prey, containing information about sleep, hunger and so on [1].
  • Build the predator.
backwheel.jpg

5.3) Results

5.3.1) The Construction of the Prey Robot

5.3.1.1) The Back Wheel

There may still be modifications to the robot, but there is now a working solution for the back wheel that allows it to turn 360 degrees without sticking out (a wheel sticking out makes the robot look less like an animal).

  • The first try was having only one wheel that sticks out a bit, but the wheel was only able to move 180 degrees, and it looks less like an animal, since the wheel looks like it is attached to the robot rather than being a part of it.
  • The second try was doing it the simple way that had worked before, having two wheels (see picture). Since the wheel is more directly under the robot, this comes at the price of some stability.

5.3.1.2) The Head

The cables for the motors and the sensors had to be mounted, which introduced some foreseen problems, since the head of the robot has to be able to move, and cables on a moving part are always a challenge.

Attaching the cables and using cable strips solved the problem (so far).

5.3.2) Building the Preys

Sorting the new boxes and building the additional prey robots was done with much childish joy.

buildingpreys.jpg

5.3.3) Head Turning

The idea behind the head turning was that the prey should be able to turn its head to either side, to give the sensors a wider range and to make the robot look more animal-like.

The major issue with rotating the head is that the cables are inflexible, which raised the concern whether this is at all possible.
It turns out that using longer cables reduces the problem.

To implement the movement of the head, the NXTRegulatedMotor [6] with its rotateTo method is used.

The HeadTurner class is used in the Mover, so that whenever the robot is moving forwards or backwards it is looking in the direction it is moving. This might have some implications, since some of the previous implementation might have assumed that the robot is looking straight ahead.

public static void forward(int leftPower, int rightPower) {
    int diff = leftPower - rightPower;
    if (Math.abs(diff) <= 5) {
        HeadTurner.centerHead();
    } else if (diff < 0) {
        HeadTurner.turnLeft();
    } else {
        HeadTurner.turnRight();
    }
    ...
}

5.3.4) The Predator

The first design idea for the predator is to make it look, more or less, like a scorpion. The motor has problems lifting a long and heavy scorpion tail, so the design has to be remade.

5.4) Conclusion

We did not have time to implement the InternalState, so we push this to the next session. The head movement should be used in more behaviors than just Wander. Perhaps the prey should sometimes stand still and turn its head to use the extra range of the sensors. At some point the different behaviors, as well as most of the code, should be properly documented.

6) Programming the Prey Robot

Date: 05/06 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 8 hours

6.1) Goal

Improve the prey by adding an internal state [1], and program the remote controlled predator.

6.2) Plan

  • For the prey
    • Implement an InternalState class (see [1]).
  • For the predator
    • Program it to be remotely controlled by a human user on a PC.

6.3) Results

6.3.1) Implementing InternalState Class

The internal state of a prey is meant to contain information about the internal 'sensors' of the robot, such as hunger, sleep, happiness, scaredness, panic and so on.
The importance of performing actions like "avoid border" or "eat food" is determined by motivation values. Previously the motivations were all in an interval from 0 to 100. For some actions the interval should be different, e.g. the motivation for "eat food" is now in an interval from 0 to 90, since eating should not prevent the robot from avoiding close obstacles.
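
A minimal sketch of what such an InternalState could look like; the update increments are made-up values for illustration:

// Sketch of an internal state; the update increments are assumptions.
public class InternalState {
    private int hunger = 0; // 0 to 100

    // Called periodically: hunger grows while wandering and is
    // consumed while the robot is eating.
    public synchronized void update(boolean eating) {
        if (eating) {
            hunger = Math.max(0, hunger - 2);
        } else {
            hunger = Math.min(100, hunger + 1);
        }
    }

    public synchronized int getHunger() {
        return hunger;
    }
}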

6.3.2) Behaviors

Behaviors have the following motivations.

Behavior    Motivation
eatFood     0 - 90
avoidEdge   0; 50 (while avoiding); 100 (on edge)
avoidObst   50 (while running); otherwise max(0, 2.5*(15 - dist))
runAway     0; 50 (while running); 100 (if IR value > 170)

6.3.2.1) Improvements to Behaviors

AvoidObstacles
Now the robot backs up and turns in the same direction as the obstacle was detected, so when it continues forward it drives away from the obstacle.

RunAway
If the predator is detected, the robot turns 180° and runs away at full speed. Then it turns around again and looks out for the predator.

Wander
The wandering was changed so the robot covers a wider area of the track. Before, it was just turning in small circles and did not really wander.

6.3.2.2) Problems

After implementing the internal state, the readings of the color sensor turned out to be very unstable. Sometimes the sensor even showed -1. At first this seemed to be a physical problem (shadow, sensor too close to the floor), but it turned out to be an issue in the code. Debugging it, the following results were found:

  • The problem occurs only during "eat food".
  • If a constant motivation of 0 is returned, the sensor seems to work correctly.
  • If eatFood and avoidBorder are the only "active" behaviors, the sensor works.
    • This leads to the assumption that there are too many threads (more than 8?) running at the same time.
  • When calling the rotateTo method of the motors (in the HeadTurner) and setting it to return immediately, more threads are started in the background, which led to exceeding the maximum number of threads.

To reduce the number of threads and the memory usage, the way rotateTo is called is changed. Instead of calling rotateTo in the HeadTurner every single time a movement order is given to the Mover class, it is now only called when the direction of the head changes.

public static void turnRight() {
    ...
    if ((regMotor.isMoving() && lastDirection != RIGHT) 
        || !regMotor.isMoving()) {
        regMotor.rotateTo(turningDegrees, true);
    }
    lastDirection = RIGHT;
}

Afterwards the sensor worked as it should.

6.3.3) The Predator

The original idea of building a predator that hits a touch sensor on the prey robots did not turn out to be good. Instead the focus was changed to making the predator push the prey robots over, and the prey can then maybe detect that it has been pushed over and stop executing. This means that the design of the predator has to be changed.

The predator is intended to be controlled via Bluetooth, but connecting to the NXT over Bluetooth was difficult and did not succeed, so it has to be tried on a different computer next time.

6.4) Conclusion

If we had enough time, we could implement a variable in the InternalState to indicate the robot's fear. This could be used to refine the motivation for running away (similar to hunger for EatFood).

Next time the predator should be redesigned, and the Bluetooth communication to the predator should be tried with a different PC.

6.5) Movies

The first movie shows the preys driving around without the predator.

In the second movie the effect of the IR ball is tested.

7) Building the Predator

Date: 06/06 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 7 hours

7.1) Goal

Building the Predator robot and implementing remote control.

7.2) Plan

  • Building the Predator robot.
  • Implement remote control of the predator via bluetooth.
  • Let the predator interact with the preys.
  • Find a way for the predator to "kill" the preys.

7.3) Results

predator.jpg

7.3.1) Building the Robot

Unlike the prey robots, the predator is built with the turning wheel in the front. This causes some problems when driving backwards: the robot does not drive straight but veers slightly to one side. The robot has a flipping mechanism in the front for tipping the preys over.
The plan was to make the robot look evil by adding some "spikes" all over it; the end result is a robot that looks like an evil, deranged elephant.

7.3.2) Control the Robot via Bluetooth

Using the RemotePilot example in the leJOS pcsamples it was possible to connect to and control the robot over Bluetooth.
The example provided some base code for developing our own Bluetooth control of the robot.
The example uses the DifferentialPilot [7], but instead our Mover class is used.

The KeyListener interface is used to listen for which keys are pressed and released.
There were some issues getting keyboard focus, so code from the KeyEventDemo was used to make a GUI that holds keyboard focus.
It turns out that on Linux holding a key results in multiple keyPressed and keyReleased events, while on a Windows machine holding a key gives multiple keyPressed events but only one keyReleased event. This gave some problems because a Linux computer is used to control the predator.
This was solved by letting the robot go forward when the "up" key is pressed, and it keeps going forward until another key is pressed (much like controlling the snake in the game "Snake" on Nokia cellphones).
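
A sketch of this scheme (the command constants and the sendCommand method are assumptions): only keyPressed matters, so the difference in auto-repeat behavior between Linux and Windows becomes irrelevant.

// Sketch (assumed command constants and sendCommand method): a key
// press switches the current command; the predator keeps executing
// it until another key arrives.
public void keyPressed(KeyEvent e) {
    switch (e.getKeyCode()) {
        case KeyEvent.VK_UP:    sendCommand(FORWARD);  break;
        case KeyEvent.VK_DOWN:  sendCommand(BACKWARD); break;
        case KeyEvent.VK_LEFT:  sendCommand(LEFT);     break;
        case KeyEvent.VK_RIGHT: sendCommand(RIGHT);    break;
        case KeyEvent.VK_SPACE: sendCommand(STOP);     break;
    }
}

public void keyReleased(KeyEvent e) {
    // Deliberately empty: the current command runs until replaced.
}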

7.3.3) Killing Prey

To kill the prey, it was envisioned that the prey could somehow determine that it was lying down and then turn itself off. The idea was to use the color sensor: when it reads black for a longer duration, it is safe to assume that the prey is lying down. The problem is that the floor intended to be used for the final presentation registers as black on the color sensor.
This was avoided by making a contraption that presses the enter button on the NXT to kill it. This means that the predator has to tip the prey over and then ram it to kill it.

7.3.4) Interaction

The predator interacts fine with the preys and has no problem tipping them over; pressing the top, on the other hand, proved troublesome because the prey keeps moving while lying down. This means that the preys are hard to kill, and something may need to be changed to make it easier.

7.4) Conclusion

We added a BackBumper behavior that depends on a touch sensor mounted in the back of the prey. This proved very easy to implement in our existing framework and took only 5-10 minutes to do.
The prey implementation was refactored during today's lab: all variable names were changed to more intention-revealing names, which makes the code easier to read. Comments were added to most of the code as well.

8) Finalizing

Date: 11/06 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 6 hours

colorsensor.jpg

8.1) Goals

Finalizing the project and the presentation.

8.2) Plan

  • Calibrate sensors
    • Color sensor
    • IR sensor
  • Add Sounds

Optional

  • Tweak values
    • make the prey robots more animal like
  • Prepare presentation
    • Think about what to say
    • Create some slides
    • Prepare "backup" video
  • Make graphs
    • Motivation graph
  • Improve handling of the predator

8.3) Results

8.3.1) Calibration

In previous tests, one of the color sensors did not work as expected. Comparing its readings with a sensor that performs well when mounted on a robot shows that the values from getLightValue differ by about 40, depending on the color.
To solve this problem calibration is attempted, but it turns out that calibrating the sensor is not that easy. There are methods called calibrateHigh and calibrateLow in the ColorSensor [5], but these have no effect on the getColorID method.
The next attempt is using readRawValue instead of getColorID. The sensor is calibrated while the prey robot is on food and on the border, and then a range around these values is used to determine whether the robot is on a colored surface or not.
This turns out to be more unstable than using getColorID, so this approach is dropped.
Using the four values (red, green, blue, background) from getRawColor does not work that well either.
In the end getColorID is used, because it works best (even without calibration). The colors red (for the border) and blue (for food) on a dark brown table are used. For these colors the sensor readings are robust enough.

Reading the raw values from the color sensor.
calibrating.JPG
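
For reference, the abandoned raw-value approach looked roughly like the following sketch; the tolerance and variable names are illustrative:

// Sketch of the abandoned raw-value approach (TOLERANCE is made up):
// sample a reference value while standing on food, then accept
// readings within a band around it.
int foodReference = cs.readRawValue(); // sampled during calibration
...
boolean onFood =
    Math.abs(cs.readRawValue() - foodReference) < TOLERANCE;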

8.4) Conclusion

The ColorSensor class has several methods for getting readings from the sensor, and it is really hard to figure out how these methods measure and how calibration and the settings of the sensor influence them. The source code for the entire ColorSensor class looks very much like work in progress and is really hard to understand.
We did not have time to do the optional goals, so these are moved to the next lab session.

9) Final Tests

tweaking.jpg

Date: 12/06 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 7 hours

9.1) Goal

Make the final tests before the presentation.

9.2) Plan

  • Test sensors at 11 o'clock.
    • Hopefully the lighting will be the same as on the presentation day.
  • Run a full test to see what needs changing.
    • Tweak values to make the preys behave more animal-like.
  • The cables still cause small problems, and they look weird.
  • The sounds for the preys and the predator need to be modified.

9.3) Results

9.3.1) Predator Sounds

The following sounds are added to the predator:

Event      Sound
Connects   Play 'Imperial March'.
Hits       Hammer sound.
Exit       Evil laugh.

9.3.2) Prey Sounds

To make it easier to play sounds for the preys, a SoundPlayer class has been implemented. This class has the responsibility of playing all sounds and has a method for each behavior, e.g. playEatFoodSound(int d). The parameter d (for duration) is not part of all the methods, but where it is, the behavior uses the play[behavior]Sound method (e.g. playEatFoodSound) to wait while driving or turning. In this waiting period the behavior can still be interrupted if another behavior has higher motivation.
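
As an illustration, such a method could look like the following sketch; the frequency mapping and the internalState field are assumptions:

// Sketch (frequency mapping and internalState are assumptions):
// the blip rises in frequency as the hunger decreases, and the
// call doubles as the behavior's waiting period.
public static void playEatFoodSound(int d) {
    int freq = 400 + (100 - internalState.getHunger()) * 8;
    Sound.playTone(freq, d); // lejos.nxt.Sound
    try {
        Thread.sleep(d);
    } catch (InterruptedException e) {}
}
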
The following sounds are implemented:

Behavior         Sound
Eating           Blip-blop rising in frequency as the hunger decreases.
Running away     Scream like crazy.
Hitting border   Makes a sound while on the border.
Back bumper      Blip-blop with a relatively high frequency and high occurrence.
Avoid obstacle   Makes a sound at the start of the behavior.
Wander around    Makes no sound.

9.3.3) Full Test

A full test with all the preys and the predator interacting on the track was run; the following issues arose during the test:

  • After a prey robot is done eating, it seems to think the back bumper is pressed.
    • The motivation value shown in the display looks as if the robot is running forward after the back bumper was pressed, but it sounds like the back bumper is pressed all the time.
    • This might have something to do with the delay that is built in to the SoundPlayer.
    • It could also be a problem that was there all along, i.e. before the SoundPlayer was implemented.
    • After removing all the sound-playing-mechanisms the error is still there, so it must be a problem that was there all the time.
    • It could be a thread problem, but removing some of the behaviors still caused the problem.
    • The problem seemed to be that when two behaviors had the same motivation and the new behavior's motivation was 0, the priority did not matter.
    • The problem was solved by requiring a behavior to have a motivation higher than zero to be run by the arbitrator.
public void run() {
    ...
    while (running) {
        ...    
        /*
         * If no previous maxBehavior exists, or the new
         * behavior has higher motivation, or if two behaviors
         * are tied for motivation and the new one has higher
         * priority.
         */
        if (maxBehavior == null 
            || motivation > maxMotivation
            || (motivation == maxMotivation 
                && maxBehavior.getBehaviorPriority() > 
                    currentBehavior.getBehaviorPriority()) 
                && motivation > 0) {
            // A new behavior has taken the lead.
            maxBehavior = currentBehavior;
        }
    }
    ...
}
  • When the preys get scared from seeing the predator, they might run across the border without noticing.
    • This is because the readings from the color sensors are not that reliable, as the sensor depends heavily on the surrounding light.
    • The sensors were adjusted to sit a few millimeters above the track. This works best at the time of day the presentation takes place.

9.4) Conclusion

After debugging and tweaking, the prey robots are better at staying inside the track and behave more animal-like. The only thing missing is to prepare the final presentation.

The preparation was done the next day and is not part of the report, but some of the diagrams can be seen in the final conclusion.

10) Final Conclusion

10.1) End Product

The final design of the prey and the predator.

prey_profile.JPG predator_profile.JPG

A video showing the game in action.

10.2) Description of the Architecture

The motivation networks described in [1] were the basis of this entire project. This approach was chosen because its biological inspiration for the behavior-based approach suited a game where the preys are autonomous agents hunted by a predator.

10.2.1) Arbitrator

The prey uses an arbitrator that calculates which behavior is the most important to perform. It does this by getting the motivation of each behavior and ranking them. In the case of two behaviors having equal motivation, the unique priority of each behavior defines which one takes precedence.

BehaviorArchitecture.jpg

The arbitrator chooses between six behaviors (see the diagram above), and the value of each motivation function is based on the input from the sensors or the internal state. The prey then performs the action related to the chosen behavior, thereby following the sense-plan-act paradigm.

10.2.2) Behaviors

10.2.2.1) Avoid Obstacle

To avoid driving head first into an obstacle, such as another prey or the predator, the input from the ultrasonic sensor is used; if the readings fall below 15 cm, the motivation rises as shown in the graph.

motivation_avoid.png

The function used to calculate the above graph is

$(15\,\text{cm} - \text{distance}) \cdot \frac{100}{15\,\text{cm}}$

When an obstacle is detected, the prey performs an avoidance maneuver, and to get a chance to complete this maneuver the motivation function has a value of 50 while the maneuver is being performed. If an obstacle is detected close enough while performing the maneuver, the result of the above equation becomes higher than 50 and is then used as the motivation. The behavior will then interrupt itself and restart the avoidance maneuver.
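
Expressed as code, this motivation function could look like the following sketch (field and method names are illustrative):

// Sketch (illustrative names): 50 while the maneuver runs, unless
// the distance-based formula demands an even higher motivation.
public int getMotivation() {
    // Distance in cm; the formula from the graph above.
    int byDistance = Math.max(0, (15 - ultrasonic.getDistance()) * 100 / 15);
    if (isRunning()) {
        return Math.max(50, byDistance); // >50 restarts the maneuver
    }
    return byDistance;
}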

10.2.2.2) Eat Food

When the prey is standing on top of food (detected by the color sensor), the motivation value of EatFood is directly proportional to the hunger value from the InternalState.
motivation_eat.png
The motivation is calculated by

$\text{motivation} = 0.9 \cdot \text{hunger}$

Whenever the prey is not on food, the hunger increases but the motivation value remains zero. When the prey is right on top of food, the motivation for EatFood is as described above. If EatFood is the dominant behavior, the prey stands still and eats, and the hunger decreases.

10.2.2.3) Other Behaviors

The motivation of the Wander behavior needs neither a graph nor an equation to be described: the motivation value is simply one all the time. This makes Wander the default behavior, and it is only active when all other behaviors have a motivation of 0.

The AvoidBorder, BackBumper and RunAway behaviors all have different avoidance maneuvers, but their motivation functions are similar.
When something is detected (the color sensor detects the border, the touch sensor is pressed, or the IR readings are above the threshold, respectively), the motivation is 100. While a prey is performing an avoidance maneuver (running), the motivation value is 50. If nothing is detected and the behavior is not running, the motivation value is 0.

motivation_general.png
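
This common shape can be summarized in one sketch, where detected() stands for the behavior-specific sensor check:

// Sketch of the shared "detect, then flee" motivation shape;
// detected() is a placeholder for the respective sensor check.
public int getMotivation() {
    if (detected()) {
        return 100; // border seen / bumper pressed / IR above threshold
    }
    if (isRunning()) {
        return 50;  // let the avoidance maneuver finish
    }
    return 0;
}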

10.3) Problems

During the end course project we encountered some difficulties, which we solved to varying degrees.
Building the preys went smoothly, but when the sensors and motors were connected to the NXT, the connector cables proved troublesome. The cables take up a lot of space, and because they are very stiff it was hard to make the head turn. We solved this by tucking the cables away in the free space under the NXT, leading all the cables from the head upwards, and securing them with cable ties. We tried using short cables for the sensors on the head, but this made the head unable to turn because of the stiffness of the cables. The final solution is a "funny" looking robot that is able to turn its head with almost no resistance.

The color sensor was the cause of a lot of frustration, especially during the final preparations. It was good at detecting different colors, but it depended too much on the surrounding light. We tried reducing the distance from the sensor to the surface, and it seems to give the best results a few millimeters from the surface. We then tried shielding the sensor from background light and turning the floodlight on and off, but neither improved the readings. Finally we tried to calibrate the sensor (as previously described) each time the program was executed, but after some investigation of the source code for the ColorSensor [5] we found that the calibration methods do not influence the getColorID method, which we use to get the color value.

We had difficulties figuring out how the predator was supposed to kill the preys, and we went through a few possible solutions before settling on hitting the prey on the top. We ran out of sensor ports on the prey and therefore used the enter button as an extra touch sensor. It would be nice to just tip the preys over to kill them, but then each prey would need a way to detect that it is lying on its side. To detect this we tried the color sensor, but it just read black, which could not be differentiated from some parts of the track. We tried other ways to detect this with the buttons and the ultrasonic sensor, but neither worked. The current solution works, but it is still far from perfect, since the predator has to hit the prey right above the enter button to kill it, and it is difficult to be that accurate with the predator.

10.4) Improvements and Future Work

10.4.1) Color Sensor

To make this game work better, the detection of the border needs to be improved. The color sensor works most of the time; to improve the color readings, the raw values should be used in combination with calibration. Another solution would be to use a touch or ultrasonic sensor and a physical wall as the border.
If we were to continue using the color sensor, the environmental conditions should be improved. The lighting conditions would somehow have to be controllable, either by having the track in a room with no natural light or by shielding the color sensor from the surrounding light. Using more easily distinguishable colors would also improve the color sensor readings.

10.4.2) Killing the Prey

Making the preys easier to kill would also improve the game significantly. Instead of using the enter button to kill a prey, a "real" touch sensor could be used, either by moving the one currently used for the BackBumper or by finding a way to attach more touch sensors to one sensor port. The touch sensor mounted in the back is actually not activated that much during an average run, so it could perhaps be removed without affecting the preys' handling too much.

10.4.3) Controlling the Predator

Currently, controlling the predator can be a bit difficult to learn, as it continues to perform the last executed action until it is told otherwise. Making the controls more intuitive would make it easier for anybody to control the predator. Controlling the predator with a cellphone instead of a laptop would improve the "remote controlled car" feeling and make the controller more mobile. If the predator were controlled with the phone's built-in accelerometer, the user would not have to look at the controller and could focus on the game.

10.4.4) Internal State

If we had more time, we would like to increase the influence of the InternalState on the preys' behaviors.
We could add a "fear" factor to the InternalState that would determine how skittish a prey is, depending on how often it has seen the predator. Other possible extensions to the InternalState could be tiredness/energy level and loneliness. All these factors should influence the performed actions, for example making the robot move slower when its energy level is low or it has not eaten for a long time.

10.4.5) Individual Properties

Right now all the prey robots are alike, with the same max speed, detection thresholds etc. To make the robots behave differently, the properties of each robot could be set individually, thereby creating some kind of Darwinian experiment where the fittest survive the longest.

10.4.6) Animal Behavior

The current prey robots are not very social because of the avoidance behaviors, and they have no way of acknowledging each other. Through communication over Bluetooth we could implement some sort of flock behavior, where the preys tell each other about food resources, the position of the predator and their own positions.
If the preys work together it makes them harder to kill; on the other hand, if they use this information to socialize, it makes them more animal-like. To this end the sounds, the movement and the appearance could be improved as well.

10.5) Conclusion

We achieved the goals of our initial project proposal with some minor changes, and we have a lot of ideas for future improvements. The behavior-based approach turned out to be very useful, not that difficult to implement, and even easier to extend. The dependence of sensor readings on environmental factors turned out to be a bigger problem than we initially expected; in our case the color sensor proved the most troublesome.
The final presentation of our project showed us that there is a lot of interest in this type of interactive robot game, which encourages us to continue to work on this idea or similar projects.

11) References

watchingrobot.jpg