10) Final Conclusion

10.1) End Product

The final design of the prey and the predator.

prey_profile.JPG predator_profile.JPG

A video showing the game in action.

10.2) Description of the Architecture

The motivation network architecture described in [krink] was the basis of this entire project. It was chosen because the biological inspiration behind the behavior-based approach suited a game where prey are autonomous agents hunted by a predator.

10.2.1) Arbitrator

The prey uses an arbitrator that decides which behavior is the most important to perform. It does this by reading the motivation of each behavior and ranking the behaviors accordingly. If two behaviors have equal motivation, the unique priority of each behavior determines which one takes precedence.

BehaviorArchitecture.jpg

The arbitrator chooses between six behaviors (see the diagram above), and the value of each motivation function is based on input from the sensors or the internal state. The prey then performs the action related to the chosen behavior, thereby following the sense-plan-act paradigm.
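The selection rule above can be sketched in plain, hardware-free Java. The names used here (`Behavior`, `getMotivation`, `getPriority`, `action`) are illustrative assumptions, not the project's actual classes:

```java
import java.util.List;

interface Behavior {
    int getMotivation();   // 0-100, computed from sensors or internal state
    int getPriority();     // unique per behavior, used only to break ties
    void action();         // the "act" step of sense-plan-act
}

class Arbitrator {
    private final List<Behavior> behaviors;

    Arbitrator(List<Behavior> behaviors) {
        this.behaviors = behaviors;
    }

    // Pick the behavior with the highest motivation; if two behaviors
    // are equally motivated, the higher unique priority wins.
    Behavior select() {
        Behavior best = behaviors.get(0);
        for (Behavior b : behaviors) {
            boolean moreMotivated = b.getMotivation() > best.getMotivation();
            boolean tieBreak = b.getMotivation() == best.getMotivation()
                    && b.getPriority() > best.getPriority();
            if (moreMotivated || tieBreak) {
                best = b;
            }
        }
        return best;
    }
}
```

On the robot this selection would run in the main control loop, followed by a call to `action()` on the winning behavior.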

10.2.3) Behaviors

10.2.3.1) Avoid Obstacle

To avoid driving head-first into an obstacle, such as another prey or the predator, the input from the ultrasonic sensor is used: when the readings drop below 15 cm, the motivation rises as shown in the graph.

motivation_avoid.png

The function used to calculate the above graph is

$\qquad \qquad \qquad \qquad \text{motivation} = (15\,\text{cm}-\text{distance})\cdot \frac{100}{15\,\text{cm}}$

When an obstacle is detected, the prey performs an avoidance maneuver. To give this maneuver a chance to complete, the motivation function keeps a value of 50 while the maneuver is performed. If an obstacle is detected close enough during the maneuver, the result of the above equation exceeds 50 and becomes the motivation value; the behavior then interrupts itself and restarts the avoidance maneuver.
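The two rules above (the linear ramp below 15 cm, and the floor of 50 while the maneuver runs) can be combined into one small function. This is a sketch with illustrative names; only the 15 cm threshold and the values 50 and 100 come from the text:

```java
class AvoidObstacleMotivation {
    static final double THRESHOLD_CM = 15.0;

    // Motivation rises linearly from 0 (at 15 cm) to 100 (at 0 cm),
    // matching (15 cm - distance) * 100 / 15 cm from the graph.
    static double fromDistance(double distanceCm) {
        if (distanceCm >= THRESHOLD_CM) {
            return 0;
        }
        return (THRESHOLD_CM - distanceCm) * (100.0 / THRESHOLD_CM);
    }

    // While the avoidance maneuver runs, motivation stays at least 50,
    // so the behavior only interrupts and restarts itself when a new
    // obstacle appears closer than 7.5 cm (where the ramp exceeds 50).
    static double motivation(double distanceCm, boolean maneuverRunning) {
        double ramp = fromDistance(distanceCm);
        return maneuverRunning ? Math.max(50.0, ramp) : ramp;
    }
}
```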

10.2.3.2) Eat Food

When the prey is standing on top of food (detected by the color sensor), the motivation value of the EatFood behavior is directly proportional to the hunger value from the InternalState.

motivation_eat.png

The slope of the line is calculated by

$\qquad \qquad \qquad \qquad \text{slope} = 0.9\cdot \text{hunger}$

Whenever the prey is not on food, its hunger increases but the motivation value remains zero. When the prey is right on top of food, the motivation for EatFood is as described above. If EatFood is the dominant behavior, the prey stands still and eats, and its hunger decreases.
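Putting the two cases together, the EatFood motivation can be sketched as follows. The class and parameter names are illustrative; only the factor 0.9 and the on/off-food distinction come from the text:

```java
class EatFoodMotivation {
    // Zero when the prey is not on food; otherwise proportional to the
    // hunger value from the InternalState, with the slope 0.9 from the graph.
    static double motivation(boolean onFood, double hunger) {
        return onFood ? 0.9 * hunger : 0.0;
    }
}
```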

10.2.3.3) Other Behaviors

The motivation of the Wander behavior needs neither a graph nor an equation to be described: its motivation value is simply one all the time. This makes Wander the default behavior, and it is only active when every other behavior has a motivation of zero.

The AvoidBorder, BackBumper and RunAway behaviors all have different avoidance maneuvers, but their motivation functions are similar.
When the trigger fires (respectively: the color sensor detects the border, the touch sensor is pressed, or the IR readings are above the threshold), the motivation is 100. While a prey is performing the corresponding avoidance maneuver (running), the motivation value is 50. If nothing is detected and the behavior is not running, the motivation value is 0.

motivation_general.png
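This shared three-level scheme amounts to one tiny function, parameterized only by the behavior's trigger. A sketch with illustrative names:

```java
class GeneralAvoidanceMotivation {
    // Shared by AvoidBorder, BackBumper and RunAway:
    // 100 when the behavior's trigger fires (border seen, bumper pressed,
    // or IR above threshold), 50 while its avoidance maneuver is running,
    // and 0 otherwise.
    static int motivation(boolean detected, boolean running) {
        if (detected) {
            return 100;
        }
        if (running) {
            return 50;
        }
        return 0;
    }
}
```

Note that a value of 100 always preempts any running maneuver, while the value of 50 lets a maneuver in progress outrank low-motivation behaviors such as Wander.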

10.3) Problems

During the end course project we encountered some difficulties, which we solved to varying degrees.
Building the prey went smoothly, but when the sensors were connected to the NXT, the connector cables proved troublesome. The cables take up a lot of space, and because they are very stiff it was hard to make the head turn. We solved this by tucking the cables away in the free space under the NXT, leading all the cables from the head upwards and securing them with cable ties. We tried using short cables for the sensors on the head, but that made the head unable to turn at all because of the stiffness of the cables. The final solution is a "funny" looking robot that is able to turn its head with almost no resistance.

The color sensor was the cause of a lot of frustration, especially during the final preparations. It was good at detecting different colors, but it depended too much on the surrounding light. We tried reducing the distance from the sensor to the surface, and it gave the best results a few millimeters above the surface. We then tried shielding the sensor from background light and turning the floodlight on and off, but neither improved the readings. Finally, we tried calibrating the sensor (as previously described) each time the program was executed, but after some investigation of the source code for the ColorSensor class [colorsensor] we found that the calibration methods do not influence the getColorID method, which we use to get the color value.

We had difficulties figuring out how the predator was supposed to kill the prey, and we went through a few possible solutions before settling on hitting the prey on the top. We had run out of sensor ports on the prey and therefore used the enter button as an extra touch sensor. It would be nicer to kill a prey by simply tipping it over, but then each prey would need a way to detect that it is lying on its side. We tried using the color sensor to detect this, but it just read black, which could not be differentiated from some parts of the track. We also tried detecting it with the buttons and the ultrasonic sensor, but neither worked. The current solution works, but it is still far from perfect, since the predator has to hit the prey right above the enter button to kill it, and it is difficult to be that accurate with the predator.

10.4) Improvements and Future Work

10.4.1) Color Sensor

To make this game work better, detection of the border needs to be improved. The color sensor works most of the time; to improve the color readings, the raw values should be used in combination with calibration. Another solution would be to use a touch or ultrasonic sensor and have a physical wall as the border.
If we were to continue using the color sensor, the environmental conditions would have to be improved. The lighting would have to be made controllable, either by placing the track in a room with no natural light or by shielding the color sensor from surrounding light. Using more easily distinguishable colors would also improve the color sensor readings.

10.4.2) Killing the Prey

Making the prey easier to kill would also improve the game significantly. Instead of using the enter button to kill a prey, a "real" touch sensor could be used, either by moving the one currently used for the BackBumper or by finding a way to attach several touch sensors to one sensor port. The touch sensor mounted on the back is actually not activated that often during an average run, so it could perhaps be removed without affecting the prey's handling too much.

10.4.3) Controlling the Predator

Currently, controlling the predator can be a bit difficult to learn, as it keeps performing the last action that was executed until it is told otherwise. Making the controls more intuitive would make it easier for anybody to control the predator. Controlling the predator with a cellphone instead of a laptop would improve the "remote controlled car" feeling and make the controller more mobile. If the predator were steered with the phone's built-in accelerometer, the user would not have to look at the controller and could focus on the game.

10.4.4) Internal State

If we had more time, we would like to increase the influence of the InternalState on the prey's behaviors.
We could add a "fear" factor to the InternalState that determines how skittish the prey is, depending on how often it has seen the predator. Other possible extensions to the InternalState are a tiredness/energy level and loneliness. All these factors should influence the performed action, for example making the robot move slower when its energy level is low or it has not eaten for a long time.

10.4.5) Individual Properties

Right now all prey robots are alike, with the same maximum speed, detection thresholds etc. To make the robots behave differently, the properties of each robot could be set individually, thereby creating a kind of Darwinian experiment where the fittest survives the longest.

10.4.6) Animal Behavior

The current prey robots are not very social: because of the avoidance behaviors, they have no way of acknowledging each other. Through communication over Bluetooth we could implement some sort of flocking behavior, where the prey tell each other about food resources, the position of the predator and their own positions.
If the prey work together, they become harder to kill; and if they use this information to socialize, they become more animal-like. The sounds, the movement and the appearance could also be improved to strengthen this animal-like impression.

10.5) Conclusion

We achieved the goals of our initial project proposal with some minor changes, but we have a lot of ideas for future improvements. The behavior-based approach turned out to be very useful, not that difficult to implement and even easier to extend. The sensitivity of sensor readings to environmental factors turned out to be a bigger problem than we initially expected; in our case the color sensor proved the most troublesome.
The final presentation of our project showed us that there is a lot of interest in this type of interactive robot game, which encourages us to continue working on this idea or similar projects.
