Notebook

Below is a collection of all the lab notebooks. They were not written to be read as one continuous document, but gathering them here makes them searchable.

# Lab Notebook

Date: 02/02 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 3 hours

## Goal

• Get familiar with leJOS, both on the computer and on the NXT.
• Compile and upload a program.

## Plan

• Build the LEGO robot.
• Install Eclipse and leJOS.
• Install the firmware on the NXT.
• Transfer and run the LineFollower.java.
• Test the values for the colors with and without floodlight.
• Change the intervals between the color checks.
• Measure memory.
• Transfer a program via Bluetooth. (optional)

## Results

### Color values

| Color | Value |
| --- | --- |
| Black | 39 |
| White | 60 |
| Light gray | 53 |
| Red | 57 |
| Dark gray | 44 |
| Green | 46 |

The program uses the value 45 as the border between black and white. Judging from the table above, changing it to 50 could be considered, since that is roughly the midpoint between the black (39) and white (60) readings.

Without floodlight:

| Color | Value |
| --- | --- |
| Black | 26 |
| White | 43 |
| Light gray | 38 |
| Red | 44 |
| Dark gray | 34 |
| Green | 42 |

Even though the values change, the differences between the colors stay approximately the same. The only consequence is that the black-white border should be about 35.

### Memory

With the variables:
Free memory: 58288

Not using the variables:
Free memory: 58352

So if the program does not have to keep the variables, it has slightly more free memory (64 bytes here). On the other hand, programming without variables would be a lot harder, so it is not really an option.

## Conclusion

In completing the assignment we got somewhat familiar with the leJOS framework and how the leJOS plugin works in Eclipse.
We have learned how to compile and upload a program through the plugin.
In the beginning we had some issues with installing the right versions of Eclipse and Java, because we used a 64-bit computer to communicate with the device.

We finished the assignment before the end of class and therefore began to figure out how to communicate with the device over Bluetooth.
The Bluetooth communication turned out to be a bit of a problem because of incompatibilities between the laptop's built-in Bluetooth and the device.

# Lab Notebook

Date: 09/02 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 4 hours

## Goal

The goal of this lab session is to get familiar and experiment with an I2C sensor, namely the ultrasonic sensor.

## Plan

• Explore the readings from the ultrasonic sensor using SonicSensorTest.java to determine under which circumstances the readings are accurate and when they are not.
• Run and analyse Tracker.java to see how the ultrasonic sensor can be used to control a robot.
• Using the setup from Philippe Hurbain, implement a leJOS wall follower.

## Results

### Test of the Ultrasonic Sensor

With the sensor mounted and SonicSensorTest.java running, the measurements were done indoors on a flat surface with no obstructions.

#### Distance

The distance measurements were done with a 15x30 cm plastic plate.
Distance is measured in centimeters from the front of the sensor.

| Distance (cm) | Value |
| --- | --- |
| 0 | 255 |
| 0.3 | 5 |
| 5 | 5 |
| 10 | 14 |
| 15 | 18 |
| 20 | 23 |
| 30 | 32 |
| 40 | 43 |
| 50 | 50 |
| 175 | 176 |
| 225 (max) | 226 |

Looking at the table, we note the following things, which are important when using the ultrasonic sensor:

• If the sensor is completely blocked, it cannot read the ultrasonic waves and so it returns 255 (no echo).
• The lowest measurement is 5 cm; from a few millimeters up to 5 cm, the return value is 5 for a steady (non-moving) target.
• For distances below 50 cm we get slightly bigger numbers than the actual distance.
• The sensor does not react to objects until they are within 225 cm, which is a bit shorter than the theoretical range of 254 cm.

#### Cone

Based on readings on the LEGO Mindstorms ground plane, the cone extends 22.5 degrees from the middle to each side (45 degrees in total).
Laying the robot on its side to measure the vertical angles of the cone, the readings give the same values.
Holding a pen a few millimeters from each of the 'eyes', the left eye gives the value 255 and the right eye gives the value 5. From this we conclude that the left eye must be the receiver and the right eye the sender.

#### Objects

Trying to read the distance to a round transparent plastic bottle, the readings are a bit odd.
With different kinds of water bottles, the values seem to be a bit off: at a distance of 17 cm the values fluctuate between 17 and 24.
Moving the object a bit to the left (from the sensor's point of view), the measurements become much more accurate. This holds not only for round objects but for all smaller objects: the most accurate results are obtained when the object is in the center of the sensor's field of view, or slightly to the left of center.

#### Time limit

If there is no object within the theoretical max range (254cm), the reading should time out.
The time limit for the measurements is the time it takes for the sound wave to travel the maximum distance back and forth.

(1)
\begin{align} \text{Time limit} = 2\cdot \frac{\text{distance}}{\text{speed of sound}} = 2\cdot \frac{2.54\text{m}}{340.29 \tfrac{\text{m}}{\text{sec}}} \approx 0.015\text{s} = 15\text{ms} \end{align}

Since no object will (probably) travel faster than the speed of sound, this frequency of readings ensures that the sensor will notice an approaching object in time.
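The timeout arithmetic can be double-checked in code (speed of sound 340.29 m/s, as in the equation):

```java
// Round-trip time for an ultrasonic ping at a given range.
public class PingTimeout {
    static final double SPEED_OF_SOUND = 340.29; // m/s, roughly at room temperature

    /** Milliseconds for a ping to travel to a target and back. */
    static double roundTripMs(double distanceMeters) {
        return 2.0 * distanceMeters / SPEED_OF_SOUND * 1000.0;
    }

    public static void main(String[] args) {
        // Theoretical max range of the NXT ultrasonic sensor is 2.54 m.
        System.out.println(roundTripMs(2.54)); // ~14.9 ms
    }
}
```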

### Tracker Beam

Using Tracker.java (which uses Car.java), the car drives forward until it measures an object within the desired distance (default 35 cm), and then it oscillates around that value, driving forwards and backwards without ever stopping, not even when it actually measures 35 cm.

#### Change of Constants

• If the minimum speed (minPower) is increased, the robot oscillates more.
• The gain determines how much the distance error matters (power = gain*error). If the gain is increased, the robot brakes harder as the distance gets smaller.
• The effect of the desired distance is obvious, so there is no need to test it.

The control in Tracker.java is a closed-loop control, seeing that it gets feedback all the time. It uses proportional control to set the speed.
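A minimal sketch of the proportional controller described above; the gain and minPower values below are placeholders, not the actual constants from Tracker.java, and the clamping details are our assumption:

```java
// Proportional controller in the style of Tracker.java: power = gain * error.
public class PTracker {
    static final int DESIRED_DISTANCE = 35; // cm (the default mentioned above)
    static final int MIN_POWER = 50;        // assumed minimum power to overcome friction
    static final double GAIN = 2.0;         // assumed proportional gain

    /** Signed motor power: positive = forward, negative = backward. */
    static int power(int measuredDistance) {
        int error = measuredDistance - DESIRED_DISTANCE;
        int p = (int) (GAIN * error);
        if (p == 0) return 0;
        // Keep the magnitude above the minimum power so the robot actually moves,
        // which is also why it can never settle exactly at 35 cm.
        if (p > 0 && p < MIN_POWER) p = MIN_POWER;
        if (p < 0 && p > -MIN_POWER) p = -MIN_POWER;
        return p;
    }
}
```

The non-zero minimum power is exactly what makes the robot oscillate around the setpoint instead of stopping there.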

#### Differential Control

Adding differential control to the speed adjustment reduced the size of the oscillations.
There were some problems with getting the speed from the motor, as the API seems to be out of date.

### Wall Follower

We built a wall follower inspired by Philippe Hurbain. The robot has limited flat surfaces, so mounting the sensor at 45 degrees can be a bit difficult.

The WallFollower.java is not too complicated; however, Philippe Hurbain uses very different values for the distances.

## Conclusion

Running SonicSensorTest.java and studying the results revealed the limits of the ultrasonic sensor regarding distance and the cone of vision. It also showed that the sensor misreads when an object is not centered in front of it.
Tracker.java works very well, and the implementation of WallFollower.java is not too difficult once the problem of attaching the sensor at a 45-degree angle is overcome.
All in all, the above shows both the limitations of the ultrasonic sensor and how to control the robot from its input.

# Lab Notebook

Date: 16/02 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 4 hours

## Goal

The goal of this lab session is to get familiar and experiment with the sound sensor by implementing programs with sound-triggered behavior and running them on the NXT device.

## Plan

1. Mount the sound sensor on the robot.
2. Test the sound sensor using the SoundSensorTest.java.
3. Collect a series of data using the DataLogger.java and represent it as a graph.
4. Control a car using the SoundCtrCar.java and describe the behavior.
5. Use the sensor to detect claps, noting the pattern of a clap.
6. Implement the PartyFinder.java that travels towards noise.

## Results

The sound sensor was mounted according to manual 9797, pages 24-26.

### Measuring sounds

The SoundSensorTest.java was constructed from SonicSensorTest.java, but with the SoundSensor class and readings from SensorPort.S1.readRawValue() (the SoundSensor was connected to port 1).
The raw value seems to range from 999 (silence) down to 0, and the ambient noise lies between 950 and 999. Unlike the ultrasonic sensor there seems to be no problem with distance, and it can also measure sounds behind the sensor.
This test can only show whether the sensor reads anything at all, because, depending on the measurement frequency, it is very difficult to see what number the display shows. There are a few other sources of error, such as the 300 ms interval in SoundSensorTest.java. It should also be remembered that sound bounces off walls and objects.

Using DataLogger.java, which writes its output to sample.txt as a comma-separated file, the following graph was generated.
The graph shows sound level in percent measured over time, and displays three claps executed in the Lego Lab room. The sound level is read using the readValue method on the SoundSensor class. The range of the percentage reading given by readValue is not well documented in the leJOS API, so there is no easy way of knowing what the threshold values 0% and 100% correspond to in dB.

The measurements were performed in a closed room to minimize background noise. The distance between the clap and the sound sensor was approximately 50 cm.
The three claps are easily distinguished, and the last peak, which reaches approximately 40, could be an echo or the reading of someone turning off the program.

### Controlling the car

SoundCtrCar.java is a simple program that makes the car controllable by loud noises. By default the program uses a threshold of 90%, as measured by readValue, as the definition of a loud noise.
When the program is run, the car has four states, as shown in the figure. At the first loud noise detected, the robot drives forward, and every consecutive loud noise makes it transition to the next state. The program is supposed to terminate and stop the car whenever the escape button is pushed, but the original SoundCtrCar.java can only be stopped while executing in the outermost loop. This behavior is corrected in the program we developed by using the ButtonListener class and checking in every loop whether the escape button has been pressed.

### Clap detection

In the attempt to distinguish claps from loud background noise, the specification by Sivan Toledo [5] was used: first wait for a low noise (below 50%), then a loud noise (above 90%) that falls back to a low noise within 250 ms. The program (ClapMeasure.java) manages to distinguish claps from background noise, but any sound mimicking the above definition of a clap can easily trick it (e.g. if the robot hits an object with the microphone).
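The clap pattern (quiet below 50%, a spike above 90%, back below 50% within 250 ms) can be expressed as a small state machine. This is a sketch of the logic, not ClapMeasure.java itself:

```java
// Detects a clap: low (<50) -> high (>90) -> back low (<50) within 250 ms of the spike.
public class ClapDetector {
    private static final int LOW = 50, HIGH = 90, WINDOW_MS = 250;
    private boolean sawLow = false;
    private long spikeTime = -1;

    /** Feed one sound-level sample (0-100 %); returns true when a clap completes. */
    public boolean sample(int level, long timeMs) {
        if (spikeTime >= 0) {                 // waiting for the level to fall again
            if (level < LOW && timeMs - spikeTime <= WINDOW_MS) {
                sawLow = false;
                spikeTime = -1;
                return true;                  // spike fell back in time: a clap
            }
            if (timeMs - spikeTime > WINDOW_MS) {
                spikeTime = -1;               // fell too slowly: sustained noise, not a clap
                sawLow = level < LOW;
            }
        } else if (sawLow && level > HIGH) {
            spikeTime = timeMs;               // spike after silence
        } else if (level < LOW) {
            sawLow = true;
        }
        return false;
    }
}
```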

### Party Finder

The party finder robot travels towards the source of noise. An additional sound sensor is added to the robot to make it possible to tell whether the source of the sound is to the left, to the right or in front of the robot. The robot then uses the sound sensor on the right to control the left motor and vice versa. It does this by adding the noise level to a minimum power and using that as the power of the motor opposite the source of the sound, i.e. if a loud sound is detected to the left of the robot, the power of the right motor is increased.
A problem occurs when one sensor measures in a different range, or more accurately, than the other. This must be accounted for by calibrating both sensors; an attempt to achieve this can be seen in a home-made calibration method in the program we developed.
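A sketch of the cross-wiring and the additive calibration described above. minPower and the offsets are assumed values, and the real program of course reads the two sound sensors instead of taking parameters:

```java
// Cross-wired "party finder": each sound sensor drives the opposite motor,
// so the robot turns toward the louder side.
public class PartyFinder {
    static final int MIN_POWER = 40;   // assumed base power

    /** Powers {left, right} given sound levels (0-100 %) and per-sensor offsets. */
    static int[] motorPowers(int leftNoise, int rightNoise, int leftOffset, int rightOffset) {
        int leftMotor  = MIN_POWER + rightNoise + rightOffset; // sound on the right
        int rightMotor = MIN_POWER + leftNoise  + leftOffset;  // sound on the left
        return new int[] { Math.min(leftMotor, 100), Math.min(rightMotor, 100) };
    }
}
```

The additive offsets only correct the sensor mismatch at one sound level, which is the weakness discussed in the conclusion below.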

## Conclusion

In completing the assignment we got familiar with the sound sensor and learned how to implement sound-triggered behavior.
The first part of the exercise went smoothly, but making the party finder behave correctly was quite problematic. The PartyFinder.java works well, but it becomes difficult when one sensor is more sensitive than the other and one motor is more responsive than the other. The home-made calibration method adds a number to the power of the motor, and therefore only adjusts for the difference at one specific sound level. To do it properly, a per-motor factor should probably also be applied to the calculations.

# Lab Notebook

Date: 23/02 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 4 hours

## Goal

The goal of the lab session is to use the light sensor to make the robot follow a black line using different techniques and make the robot detect different colors.

## Plan

1. Mount the light sensor on the robot and make a program BWSensorDriver.java that uses the BlackWhiteSensor.java class and test the sensors light and dark detection.
2. Try out the LineFollowerCal.java line follower program.
3. Make a program ColorSensor.java based on the BlackWhiteSensor.java and make it detect three different colors: black, blue and white.
4. Use the ColorSensor.java to make a line follower program StopInGoal.java, that follows a black line and stops in a blue goal zone.
5. Try to make a smoother and faster line follower program using a PID regulator [pid] and only one light sensor.

## Results

### Black White Detection

The light sensor was mounted according to manual 9797, pages 32-34.

To calibrate the light sensor using the BlackWhiteSensor.java class, first place the robot over a black surface and press ENTER, then place it on a white surface and press ENTER again. The class uses the mean of the black and white values as the threshold for detecting light and dark.
The program BWSensorDriver.java, which uses BlackWhiteSensor.java, was created; it writes the value to the display.
The program is good for detecting black and white, but it cannot detect colors: since it uses a single threshold, the result is always either black or white.
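The threshold logic of BlackWhiteSensor.java reduces to a few lines. The constructor and method shapes here are our sketch, using the light percentage readings from the first notebook (black 39, white 60) as example values:

```java
// Black/white classification with the threshold halfway between the calibrated values.
public class BlackWhite {
    private final int threshold;

    BlackWhite(int blackValue, int whiteValue) {
        // Mean of the two calibration readings, as BlackWhiteSensor.java does.
        this.threshold = (blackValue + whiteValue) / 2;
    }

    boolean black(int reading) { return reading < threshold; }
    boolean white(int reading) { return reading >= threshold; }
}
```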

### Line Follower with Calibration

The LineFollowerCal.java program makes the robot follow a line quite well. The program calibrates using the BlackWhiteSensor.java and starts to follow the line as soon as it is calibrated. If it cannot detect any black in its vicinity, it goes around in circles.
There is very little fluctuation in the robot's driving, and the oscillations become very small. This is probably because there is no sleep between the readings, as there was in previous assignments.

### ColorSensor with Calibration

The BlackWhiteSensor.java was used as the basis of a new program, ColorSensor.java, which calibrates black, white and blue using the raw value of the sensor. The raw values were chosen over the percentage values to get more accurate readings and a larger measurement span, making it easier to distinguish the colors.
The program divides the measurement range into three areas using the read raw values as thresholds, instead of the two areas in BlackWhiteSensor.java.
To use the ColorSensor.java program, CSDriver.java was created, which prints the values to the display.

The following figure illustrates the relationship between the different colors, their respective raw values, and how the program divides the measurement range.

The following table is from a calibration of the three colors, performed indoors in normal lighting, measured on the smooth surface of colored LEGO bricks.

| Color | Raw value |
| --- | --- |
| White | 401 |
| Blue | 583 |
| Black | 642 |

The program proved able to distinguish the three colors well.
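A sketch of the three-way classification: with raw values, lower readings mean brighter surfaces, so white < blue < black. The midpoint thresholds are our assumption about how ColorSensor.java divides the ranges; the calibration values come from the table above:

```java
// Three-way color classification on raw light-sensor values (lower = brighter).
public class ColorSensor {
    enum Color { WHITE, BLUE, BLACK }

    private final int whiteBlue, blueBlack; // thresholds between adjacent colors

    ColorSensor(int white, int blue, int black) {
        // Midpoints between the calibrated readings (our assumption).
        this.whiteBlue = (white + blue) / 2;
        this.blueBlack = (blue + black) / 2;
    }

    Color classify(int raw) {
        if (raw < whiteBlue) return Color.WHITE;
        if (raw < blueBlack) return Color.BLUE;
        return Color.BLACK;
    }
}
```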

### Line Follower that stops in a Goal Zone

Based on the LineFollowerCal.java program, StopInGoal.java, which uses ColorSensor.java, was constructed. The program uses one light sensor and makes the robot travel along one side of the black line and stop when it reaches a blue area.
The challenge with this approach is that when the sensor measures right between black and white, it reads a value similar to blue, so the naive implementation of stopping as soon as the light sensor reads blue does not work. To overcome this problem, the light sensor has to read a blue value 20 times in a row. Since the time between readings is 10 ms, the light sensor has to read blue for 200 ms in a row. There is of course still a small chance that this happens before the robot reaches the blue zone, namely if the robot were to drive completely straight with the light sensor reading exactly between black and white (= blue), but the program has not proven good enough for that to happen.

```java
...
final int consMeas = 20;
...
while (!Button.ESCAPE.isPressed()) {
    LCD.drawInt(sensor.light(), 4, 10, 2);
    LCD.refresh();

    if (sensor.black()) {
        Car.forward(power, 0);
        count = 0;
    } else if (sensor.white()) {
        Car.forward(0, power);
        count = 0;
    } else if (sensor.blue()) {
        count++;
        if (count > consMeas) {
            count = 0;
            break;
        }
    }
}
...
```


The value 20 (consMeas) can seem a bit magical, and maybe it is. The value 5 was too small, and with the value 50 the robot drove quite far into the blue zone; 20 is simply a choice between those two scenarios, though 30 would probably work just as well.
This implementation works nicely on a straight line, but it does not perform well on a narrow or curved line.

As can be seen in the video above, the robot does not stop until the light sensor has read several blue values.

### PID Line Follower

The description and pseudocode from A PID Controller For Lego Mindstorms Robots [pid] were used to make a PID-regulated line follower program: PIDLineFollower.java. The program uses a modified version of BlackWhiteSensor.java, BWSensorModified.java, to read and calibrate the sensor values.
After implementing PIDLineFollower.java there were some problems with the constants (Kp, Ki, Kd): the robot turned around in circles and the turn value increased to well over 10000. After following the approach in the "Tuning A PID Controller Without Complex Math" section of [pid], the Kp value was set lower. This made the robot follow the line, but oscillate quite a bit.

After this we kept tweaking the Kp value until we got a satisfactory result. Some of the other terms needed a lot of tweaking as well, but we ran out of time at the lab session.
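The PID update that PIDLineFollower.java is built around boils down to a few lines. The class below is a minimal sketch following [pid]; the constants are placeholders, not our tuned values:

```java
// One PID step: turn = Kp*error + Ki*integral + Kd*derivative.
public class PidController {
    private final float kp, ki, kd;
    private float integral = 0, lastError = 0;

    PidController(float kp, float ki, float kd) {
        this.kp = kp; this.ki = ki; this.kd = kd;
    }

    /** error = light reading minus the calibrated black/white midpoint. */
    float update(float error) {
        integral += error;                    // accumulated error (I term)
        float derivative = error - lastError; // change since last reading (D term)
        lastError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}
```

A runaway turn value like the 10000+ we saw is typical of an integral term accumulating while the robot circles, which is why the tuning guide has Kp adjusted first.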

The resulting behavior of the robot can be seen in the following video.

## Conclusion

During the assignment we succeeded in making a robot with a mounted light sensor detect different colors and in making the robot follow a black line.
Completing assignments 1 to 3 in the plan went well, without any major glitches. Exercise 4, where we were supposed to make a line-following robot that stops when entering a blue area, was a bit more challenging. We chose to implement the program using only one sensor rather than the (in theory) more trivial implementation with two sensors, because we wanted to see if it could be done. It turned out that it could be done fairly easily. Our idea was to count the measurements in the blue range and require the robot to stay in the blue area for some time, to eliminate the short readings of blue that occur between reading black and white. We implemented this idea with another if-else clause that checks whether the read value is in the range of the blue color.
This approach turned out to work fine, once we determined the number of measurements needed to ensure that the robot was in the blue area.

The last exercise proved a bit more troublesome. We had some problems in the beginning with the implementation of the approach described in [pid], but after we followed its term-tweaking part we got the robot to follow a line. We ran out of time before we got to tweak all the terms correctly, so the behavior of the robot wasn't that pretty. Besides, we only made the robot follow a straight line; with more time, or had we begun tweaking the different terms earlier, we could perhaps have gotten a better result.

## References

[[bibliography title=""]]

lesson
The lesson description - http://legolab.daimi.au.dk/DigitalControl.dir/NXT/Lesson4.dir/Lesson.html
bwsensor
BlackWhiteSensor.java - http://legolab.daimi.au.dk/DigitalControl.dir/NXT/Lesson4.dir/BlackWhiteSensor.java
linefollower
LineFollowerCal.java - http://legolab.daimi.au.dk/DigitalControl.dir/NXT/Lesson4.dir/LineFollowerCal.java
car
Car.java - http://legolab.daimi.au.dk/DigitalControl.dir/NXT/src/Car.java
pid
A PID Controller For Lego Mindstorms Robots - http://www.inpharmix.com/jps/PID_Controller_For_Lego_Mindstorms_Robots.html

[[/bibliography]]

# Lab Notebook

Date: 01/03 2012
Group members participating: Falk and Jakob
Duration of activity: 3 hours

## Goal

The goal is to build a self-balancing robot inspired by the NXTway by Hurbain [Hurbain] and the Legway by Hassenplug [Hassenplug], and to experiment with communication between the NXT and a computer via Bluetooth.

## Plan

1. Build the NXTway robot
2. Connect to the robot via Bluetooth and transfer a program
3. Run the NXTway program

## Results

### Building the NXTway robot

The robot was constructed according to the pictures in [Hassenplug] (with just one minor variation because of the big battery pack).

### Bluetooth

Following the guidelines in [NXJ], connecting to the robot and transferring a program just worked (it only took somewhat longer than over a USB connection).

### NXTway

Compared to the original code [Bagnall], some changes were made in our Sejway.java:

1. First, the way of accessing the motors was changed to use the MotorPort.
2. Next, the PID constants were changed from int to float.
3. Button listeners for the right and left buttons were used to adjust the offset at run time. The offset and current error are shown on the LCD to make it easier to find the right offset value.
4. Eliminate the int_error from the computation by setting KI to 0.
5. Set KI = 4 again and introduce the alpha calculation as described in [situated]:
• int_error = ((int_error + error) * 2)/3; changed to: int_error = (1-alpha)*int_error + alpha*error;
6. Tweak the constants to improve the robot's ability to hold its balance.
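The two integral-update rules from step 5, side by side. With a constant error, the original rule settles at twice the error (its coefficients sum to 4/3), while the alpha rule is a proper exponentially weighted average and settles at the error itself:

```java
// Two ways of accumulating the integral error in the Sejway balance loop.
public class IntegralError {
    /** Original rule: coefficients (2/3, 2/3) overweight the error by a factor of 2. */
    static float original(float intError, float error) {
        return (intError + error) * 2f / 3f;
    }

    /** Alpha rule from [situated]: exponentially weighted moving average. */
    static float smoothed(float intError, float error, float alpha) {
        return (1 - alpha) * intError + alpha * error;
    }

    public static void main(String[] args) {
        float a = 0, b = 0;
        for (int i = 0; i < 200; i++) {   // feed a constant error of 10
            a = original(a, 10);
            b = smoothed(b, 10, 0.3f);
        }
        System.out.println(a + " " + b);  // original settles near 20, smoothed near 10
    }
}
```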

#### Results

1. Without changing any constants, the NXTway fell backwards. It is very important to give it the right balance on startup.
2. The higher accuracy in the computation did not noticeably change the behaviour of the robot. It was still dependent on an exactly balanced start position and tended to fall backwards.
3. Being able to adjust the offset eliminates the human error in the setup. The NXTway is able to balance for a few seconds.
4. Setting KI = 0 made the robot a little more stable. It seems the int_error is not that important for the NXTway.
5. With alpha values of 0.1 and 0.3 the robot still fell.
6. The desired result was to make the robot accelerate faster and thereby increase its stability.
• Increasing KP to make the robot accelerate faster did not work; the robot just oscillated more.
• Increasing KD made the NXTway more unstable.

## Conclusion

When the robot tilts a bit to one side it can regain its balance, but once it tilts too much there is no way back and it falls.
A possible explanation for the NXTway's tendency to fall backwards could be the heavy battery pack: the robot cannot accelerate fast enough to compensate for it, so once it starts falling there is no recovery.
Changing the constants improved the balance a little, but the robot always fell after a short time.
Adding arms to the robot to compensate for the weight of the battery did not help much.
With more time, further experiments could have been done with changing the wheels and trying to move the robot's point of balance.

## References

[[bibliography title=""]]

Hassenplug
Steve Hassenplug, Legway
NXJ
Bluetooth Instructions
Hurbain
Philippe Hurbain, NXTway
Bagnall
Brian Bagnall Sejway sourcecode
Sejway
modified Sejway.java
situated
Using Situated Communication in Distributed Autonomous Mobile Robots

[[/bibliography]]

# Lab Notebook

Date: 08/03 2012
Group members participating: Falk and Jakob
Duration of activity: 4 hours

## Plan

1. Get an RCX light sensor to work with the NXT.
• This is for the upcoming 'Alishan train track' race.
2. Build a robot suitable for the Braitenberg vehicles.
3. Implement a simple solution for vehicle 1 using the readValue (i.e. not the raw value).
4. Implement a calculator class that maps the raw values from the sensors to a meaningful power value for the motors, and use this for vehicle 1.
5. Implement vehicle 2a and 2b, and experiment with other robots with lights on top of them.
6. Implement vehicle 3.

## Results

### RCX Light Sensor

Using the converter cable from the RCX light sensor to the NXT sensor port, the following code works:

```java
RCXLightSensor sensor = new RCXLightSensor(SensorPort.S1);
sensor.activate();
```

The activate() method should be used even though it is deprecated.

### Build the Robot

The robot was built according to manual 9797, pages 8-22.

### Vehicle 1

A light sensor has been added to the robot pointing approximately 45 degrees upwards in front of the robot.

Developing the simple and obvious implementation resulted in the Braitenberg1.java.

The main loop looks like this:

```java
...
while (running) {
    int light = sensor.getLightValue();
    Car.forward(light, light);
    LCD.drawString("Light:  " + light, 0, 0);
}
...
```


The robot drives a bit slowly (power approx. 60). Seeing that it is almost impossible to get a light reading above 85 unless a flashlight is used, adding a constant of about 15 to the power value, to compensate for friction in the system, makes the robot drive at a decent pace and only stop when it is actually dark around it. Another way to compensate for the friction is to multiply the light value by a factor (e.g. 1.5).
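The two friction-compensation schemes just mentioned, as a sketch (the clamp to the NXT's 0-100 power range is our addition):

```java
// Two ways to compensate for friction when mapping light readings to motor power.
public class PowerMap {
    /** Add a constant offset (about 15) to the light reading. */
    static int additive(int light, int offset) {
        return Math.min(100, light + offset);
    }

    /** Multiply the light reading by a factor (e.g. 1.5). */
    static int multiplicative(int light, double factor) {
        return Math.min(100, (int) (light * factor));
    }
}
```

The additive version shifts the whole curve up, so the robot still stops in darkness only if low readings plus the offset stay below the power needed to move; the multiplicative version keeps zero at zero but stretches the upper range.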

### Calculator

A calculator class, SensorConvertor.java, was developed to map an arbitrary sensor reading to a power value. It does this by keeping track of minimum and maximum values that adapt to the extremes seen.
The main calculation is this:

```java
public int normalize(int value) {
    ...
    return (int) (100 * (0.0 + value - min) / diff);
}
```


As an extra feature it uses a home-made GraphMaker.java to draw the values as a graph on the LCD.

The code from Braitenberg1.java needed some minor modifications to run with this, such as using readRawValue() for more accurate readings and converting the result with normalize(); this turned into a new Braitenberg1.java program.
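The adaptive min/max tracking that the elided part of normalize() performs might look like this; note this is our reconstruction, not the original SensorConvertor.java:

```java
// Maps raw sensor readings to 0-100, adapting to the extreme values seen so far.
public class SensorConvertor {
    private int min = Integer.MAX_VALUE;
    private int max = Integer.MIN_VALUE;

    int normalize(int value) {
        if (value < min) min = value;   // widen the range as new extremes appear
        if (value > max) max = value;
        int diff = max - min;
        if (diff == 0) return 0;        // only one distinct value seen so far
        return (int) (100 * (0.0 + value - min) / diff);
    }
}
```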

### Vehicle 2a and 2b

Using the new Braitenberg1.java and adding another sensor in the code, the main loop of Braitenberg2a.java looks like this:

...
while (running) {
int leftLight = 1023 - SensorPort.S2.readRawValue();
int rightLight = 1023 - SensorPort.S1.readRawValue();
int lp = sc.normalize(leftLight);
int rp = sc.normalize(rightLight);
Car.forward(lp, rp);
}
...


To make the program for vehicle 2b, the line Car.forward(lp, rp) should be changed to Car.forward(rp, lp).

### Vehicle 3

There was not enough time at the lab session to make vehicle 3.

## Conclusion

Braitenberg vehicles 1, 2a and 2b were successfully constructed.
A subgoal for this exercise was to get the robots to interact with each other, but with the differences in the construction of the robots (e.g. the height of lights and sensors) as well as the programmed sensitivity of the sensors, this became quite difficult.


# Lab Notebook

Date: 29/03 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 3 hours

## Goal

Analyze the SoundCar.java.

## Plan

1. Try the program.
2. Analyze the three classes: RandomDrive, AvoidFront, PlaySound.
3. Examine how suppressCount is used to implement the suppression in Behavior.
4. Implement "drive towards light".

## Results

The robot was built according to manual 9797, pages 28-30.

### Run the program

SoundCar.java was transferred to the robot and run, and the following observations were made:
The program seems to make the robot drive in small intervals in random directions, but always forward. The intervals also seem to be of random length, and the robot makes some random sounds as well.
If the ultrasonic sensor detects an obstacle within approximately 20 cm, the robot backs off and turns left.
When an event triggers, the robot shows it on the LCD:

• The motors are either s (stop) or f (forward).
• The avoid row shows the measurement from the ultrasonic sensor, and when the robot avoids something it shows b (back), f (forward) and s (stop).
• The sound row displays s (sound) when it plays a sound.

In addition there is a column that shows '1' when the behavior is suppressed (0 otherwise).
Judging from the suppression, sound seems to have the highest priority, avoid comes second, and drive last.

#### RandomDrive

RandomDrive drives with a random amount of power for a random amount of time, then stands still for a random amount of time. All the 'random amounts' are of the form constant1 + constant2*random. The effect is that the interval is no longer between 0 and constant1+constant2, but between constant1 and constant1+constant2. This is a great advantage: e.g. if the power were set to 10 the robot would not move, so with this form a triggered behavior will always have a visible effect (unless it is suppressed).
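The constant1 + constant2*random form as a helper (names and example values are ours):

```java
import java.util.Random;

// Random values in [base, base + span), not [0, base + span),
// so a triggered behavior always has a visible effect.
public class RandomDrive {
    static final Random RNG = new Random();

    /** e.g. randomIn(30, 70) gives a power between 30 and 99, never a useless 5. */
    static int randomIn(int base, int span) {
        return base + RNG.nextInt(span);
    }
}
```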

#### AvoidFront

AvoidFront is triggered by the readings from the ultrasonic sensor: as long as the distance is larger than a threshold, the thread sits in a busy-wait loop. Once the distance drops below the threshold, it suppresses lower-priority behaviors, drives backwards, turns left and stops.

#### PlaySounds

PlaySounds is triggered by a fixed time delay. Every 10 seconds it suppresses lower-priority behaviors and plays a random number of notes (between 5 and 25).

### Behavior

Behavior.java extends Thread and calls this.setDaemon(true). This makes sure that once the main thread exits, all daemons are terminated as well, which makes thread handling a bit easier.

The suppressCount is incremented every time a behavior is suppressed by a higher-priority behavior and decremented each time it is released.
It works somewhat like a semaphore: the behavior is suppressed if suppressCount is higher than 0 and not suppressed otherwise.
Using a counter rather than a boolean ensures that a behavior suppressed more than once has to be released the same number of times before it can behave freely again.
If the behavior is suppressed, it is blocked from calling the car's forward and backward methods with a simple "if (! isSuppressed())" check.
The use of suppressCount is illustrated in the picture to the right, where the suppressCount is represented by the red numbers.

Fred Martin's [martin] arbiter holds an array named "process_enable" that works like the suppressCount. The array holds an integer for every process: 0 if the process is deactivated, and the priority of the process if it is active. It is not possible to tell whether a process has been blocked more than once, but as all the blocking (scheduling) is done by a central unit, the arbiter, this is not a problem.
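The counting scheme can be sketched in a few lines (method names modeled on, but not copied from, Behavior.java):

```java
// Suppression as a counter: a behavior is free only when every suppressor has released it.
public class Suppressible {
    private int suppressCount = 0;

    synchronized void suppress() { suppressCount++; }
    synchronized void release()  { if (suppressCount > 0) suppressCount--; }
    synchronized boolean isSuppressed() { return suppressCount > 0; }
}
```

Because each suppress() must be matched by a release(), two higher-priority behaviors can suppress the same behavior independently without one accidentally unblocking the other, which a plain boolean could not guarantee.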

### Drive towards light

The DriveTowardsLight.java class was implemented and the corresponding lines were inserted into the modified SoundCar. DriveTowardsLight.java uses a busy-wait loop which exits on a light reading above the threshold:

while (leftLight < lightThreshold && rightLight < lightThreshold) {
...
}


The priority was set higher than RandomDrive and lower than AvoidFront. The overall behavior of the robot is therefore to drive around until it measures a light value above the threshold. Avoiding obstacles and playing sounds of course still take precedence.

## Conclusion

The functionality of the prioritized suppression was explored and a new behavior was developed.

## References

[[bibliography title=""]]

braitenberg
Braitenberg, V. 1984. Vehicles, Experiments in Synthetic Psychology London, Cambridge: The MIT Press.
dean
Tom Dean, Notes on construction of Braitenberg's Vehicles, Chapter 1-5 of Braitenbergs book
brooks
Rodney Brooks, A robust layered control system for a mobile robot, IEEE Journal of Robotics and Automation, RA-2(1):14-23, 1986, also MIT AI Memo 864, September 1985.
martin
Fred G. Martin, Robotic Explorations: A Hands-on Introduction to Engineering, Prentice Hall, 2001.

[[/bibliography]]

# Lab Notebook

Date: 12/04 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 4 hours

## Goal

Build a robot that follows the Alishan train track and wins the Robot Race.

• The car must start from the start area. No part of the car is allowed to exceed the start area.
• A push on ENTER starts the car and the car should then follow the track to the retrace area, the top platform. From the retrace area the car should drive back until it enters the start area again. When the car is in the start area again it should stop, and the time elapsed since start should be shown on the LCD. The car should be completely inside the start area before it stops.
• When on the top the car should be completely inside the retrace area before going back.

## Plan

Instead of the standard car a customised robot car was built. The plan was to make it fast and robust.
The robot uses two light sensors in the front pointing to the ground.
The program is designed as a state machine where every part of the track corresponds to a state.

### The state machine

The states are:

1. Drive straight ahead out of the green zone (the robot has to be aligned properly)
2. Follow the black line until both sensors see black
3. Turn right
4. Follow the line until both sensors see black
5. Turn left
6. Follow the line until both sensors see black
7. Drive straight ahead for some time (to cross the black line)
8. Do a 180° turn
9. Drive straight ahead for some time (to cross the black line)
10. Follow the line until both sensors see black
11. Turn right
12. Follow the line until both sensors see black
13. Turn left
14. Follow the line until both sensors see green
15. Drive straight ahead for some time and stop
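The 15 steps above can be sketched as a plain Java enum with a linear transition function (illustrative names; the real program additionally needs the motor and sensor code behind each state):

```java
class AlishanStates {
    enum TrackState {
        LEAVE_START,            // 1. drive out of the green zone
        FOLLOW_TO_FIRST_CURVE,  // 2. follow line until both sensors see black
        TURN_RIGHT_1,           // 3.
        FOLLOW_TO_SECOND_CURVE, // 4.
        TURN_LEFT_1,            // 5.
        FOLLOW_TO_TOP,          // 6.
        CROSS_LINE_UP,          // 7. drive straight to cross the black line
        TURN_AROUND,            // 8. 180° turn on the platform
        CROSS_LINE_DOWN,        // 9.
        FOLLOW_DOWN,            // 10.
        TURN_RIGHT_2,           // 11.
        FOLLOW_ON,              // 12.
        TURN_LEFT_2,            // 13.
        FOLLOW_TO_GREEN,        // 14. follow line until a sensor sees green
        STOP                    // 15. drive a bit further and stop
    }

    // The track is linear, so each state simply advances to the next one.
    static TrackState next(TrackState s) {
        TrackState[] all = TrackState.values();
        return s == TrackState.STOP ? s : all[s.ordinal() + 1];
    }
}
```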

## Results

• Before driving, the two light sensors are calibrated. During calibration, a black-white and a green-white threshold is computed for each of the two sensors.
• A simple line follower using two sensors was implemented.
• To turn the robot by a certain angle (i.e. 180° to turn it around on the top platform) the method Motor.X.rotate was used. It only works if the TachoCount is reset beforehand.
• In the same way, the robot is set to drive a certain distance to come out of the green start zone.
• On the platform the robot failed to turn and follow the line. The reason was that the measurements of the sensors are different at the slope and the platform, so the threshold values computed before do not work. To solve this problem the thresholds have to be adjusted. With $\text{threshold} = \frac{\text{black} + \text{white}}{2} + \frac{\text{white} - \text{black}}{4}$ the robot kept following the line.
• Another problem was that if the robot drives too fast towards the curve, it moves past the two black lines and does not turn. To solve this, the robot was set to slow down after the distance of the slope (using Motor.X.getTachoCount to estimate the distance driven).
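The two threshold formulas used above can be sketched in plain Java (no leJOS needed; the example values in the test are the black/white calibration readings from the first lab session, black = 39 and white = 60):

```java
class Thresholds {
    // Midpoint threshold between the calibrated black and white readings.
    static int midpoint(int black, int white) {
        return (black + white) / 2;
    }

    // Threshold shifted a quarter of the range towards white; this is the
    // adjustment that kept the robot on the line on the slope and platform.
    static int adjusted(int black, int white) {
        return (black + white) / 2 + (white - black) / 4;
    }
}
```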

## Conclusion

Since there were so many problems with this approach, and with all the different states and substates, the program became very confusing [source]. Especially detecting the curves with the two-sensor approach was a real problem.
The notebook continues with a new approach in the next entry.

## References

[[bibliography title=""]]

rules
Rules of the Robot Race http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson8.dir/Lesson.html
source
Source Code of LineFollower.java

[[/bibliography]]

# Lab Notebook

Date: 17/04 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 7 hours

## Goal

As in the first try the goal is still to solve the Alishan train track, but with a different, simpler, approach.

## Plan

Since the state machine got out of hand, the new program should be simpler with fewer states.
It is based on the line follower from the 4th session. The robot is still using two light sensors but only one of them is used to follow the line. The other one is used to detect the different sections of the track.

There are 8 states now:

1. From the start zone the left sensor follows the left edge of the line
2. If the right sensor detects the black line at the first curve the sensors switch roles, now the right sensor follows the right edge of the line and the robot turns right
3. At the next curve the robot turns left and the sensors switch roles again, so the left one follows the left edge of the line
4. If the right sensor detects the border of the top zone a turning move is triggered
5. On the way down out of the top zone the left sensor follows the left edge of the line
6. At the curve the sensors switch roles, now the right sensor follows the right edge of the line
7. At the last curve they switch again; the left sensor follows the left edge of the line
8. If the right sensor detects green the car stops in the goal zone

## Results

• The new approach works better than the old one and makes it easier to test the different parts of the program and find bugs.
• Additional lights helped to make the sensors more independent from the surrounding light and driving direction (up/down matters, too)
• On the way down the robot was supposed to use the same states as on the way up, but there are a few adjustments needed. Because of the slope the motor power on the way down has to be reduced.
• On the way back the turning of the robot around the curves has to be adjusted, too.
• There are gaps on the tracks that are detected on the way down, because the sensor is close to the track. They are hard to compensate, because they are not detected every time.
• The status of the battery has an influence on the motors and hence changes the turning in curves and at the top of the course.

## Conclusion

The line follower works, and after countless tries the robot almost made the track. These two videos show the result:

Run of the robot, where it almost reaches the goal zone.

Here the robot reaches the green zone but needs a little help on the way there.

The source code can be found in the references [source].

## References

[[bibliography title=""]]

firstTry
First approach with a more complicated state machine
rules
Rules of the Robot Race http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson8.dir/Lesson.html
source
Source Code of Race2.java

[[/bibliography]]

# Lab Notebook

Date: 19/04 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 3 hours

## Goal

Investigate how a robot can move around in a cartesian coordinate system, keeping track of its position and direction.

## Plan

• Build a robot.
• Implement a program that makes the robot move according to a cartesian coordinate system, like Bagnall's 'Blightbot' [bagnall].
• Test the influence of rotation speed, wheel diameter and track width.
• Describe how to navigate while avoiding objects.
• Compare and consider the improved navigation.

## Results

### The robot

The robot was built according to the manual 9797, pages 8-22.

### The program

Before starting programming the following ideas were considered:

• The DifferentialPilot should be used.
• Wheel diameter: 5.6 cm
• Track width: 11.1 cm (middle of the wheels)
• The program uses the DifferentialPilot with the OdometryPoseProvider attached to keep track of the pose of the robot.
private static DifferentialPilot pilot = new DifferentialPilot(5.6, 11.1, Motor.C, Motor.B);
private static OdometryPoseProvider poseProvider = new OdometryPoseProvider(pilot);

• The main structure of the program is similar to the main structure of Bagnall's program, at least with the goTo method.
...
goTo(200,0);
goTo(100,100);
goTo(100,-50);
goTo(0,0);
...


The first implementation used the method angleTo, which caused the robot to turn wrong at certain points: even though the position of the robot was correct, the calculation of the heading was incorrect.
The reason is that angleTo returns the angle relative to the x-axis, not relative to the robot's pose. Using relativeBearing instead solved the problem.

public static void goTo(float x, float y) {
Pose currentPose = poseProvider.getPose();
float heading = currentPose.relativeBearing(new Point(x, y));
float distance = currentPose.distanceTo(new Point(x, y));
...
}
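A sketch of the geometry behind relativeBearing: it is the angle to the target relative to the x-axis (what angleTo returns) minus the robot's current heading, normalized to (-180, 180]. This is not the leJOS source, just the underlying math:

```java
class NavMath {
    // Bearing of (tx, ty) relative to a robot at (x, y) whose heading is
    // given in degrees counter-clockwise from the x-axis.
    static float relativeBearing(float x, float y, float heading,
                                 float tx, float ty) {
        float angleTo = (float) Math.toDegrees(Math.atan2(ty - y, tx - x));
        float bearing = angleTo - heading;
        // Normalize to (-180, 180] so the robot always takes the short turn.
        while (bearing > 180) bearing -= 360;
        while (bearing <= -180) bearing += 360;
        return bearing;
    }
}
```

For example, a robot at the origin heading along the y-axis (90°) has a relative bearing of 0 to a target straight ahead at (0, 100), while angleTo would report 90°, which is exactly the discrepancy that made the first implementation turn wrong.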


A whiteboard marker was attached at the front of the robot, and it made the following pattern:

As can be seen, this is a bit off compared to [bagnall], but another run made this a lot better:

The second picture shows that the robot's path is almost completely on the desired track.

### Tests

To test whether the rotation speed has an impact on the accuracy, the robot was run with the following rotation speeds (only one run for each value).
Wheel diameter: 5.6 cm

| Rotation speed | Distance from the goal (cm) |
|---|---|
| 40 | 54 |
| 80 | 49 |
| 160 | 48 |
| 320 | 44 |

Surprisingly, the robot gets closer to the goal as the rotation speed increases, but this is based on very few readings.
The tentative conclusion is that the higher the rotation speed, the lower the drift.

The robot was also tested with different wheel diameters.
Rotation speed: 320

| Wheel diameter (cm) | Distance from the goal (cm) |
|---|---|
| 5.7 | 29 |
| 5.4 | 140 |
| 5.0 | 88 |

The wheel diameter obviously has a huge impact on the accuracy of the robot.

The best run was with the rotation speed at 320 and a wheel diameter of 5.7, not the originally measured 5.6.

There was not enough time at the lab session to test the influence of the track width.

### Avoiding

To avoid objects while navigating, the non-blocking travel method is needed, so the robot can detect objects and possibly act on them while driving. To avoid something the robot must be stopped. The pilot keeps track of where it is, and if the avoiding move is also done through the pilot, it will still know the pose.
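A minimal sketch of this pattern with a stubbed pilot interface instead of the real leJOS DifferentialPilot (the real class offers travel(distance, immediateReturn), isMoving() and stop(); the names and structure here are otherwise illustrative):

```java
import java.util.function.IntSupplier;

class AvoidSketch {
    // Stub of the parts of the pilot used by this pattern.
    interface PilotLike {
        void travel(double dist, boolean immediateReturn);
        boolean isMoving();
        void stop();
    }

    // Drive 'dist' but abort as soon as the sonar reads under stopDistanceCm.
    // Returns true if the travel completed undisturbed, false if aborted.
    static boolean travelWatching(PilotLike pilot, IntSupplier sonarCm,
                                  double dist, int stopDistanceCm) {
        pilot.travel(dist, true);        // non-blocking: returns immediately
        while (pilot.isMoving()) {
            if (sonarCm.getAsInt() < stopDistanceCm) {
                pilot.stop();            // the pose provider still tracks us
                return false;
            }
        }
        return true;
    }
}
```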

There was not enough time at the lab session to consider the improved navigation.

## Conclusion

With a relatively simple implementation using the DifferentialPilot with the OdometryPoseProvider it was possible to have the robot drive around in a coordinate system with good results. To improve the accuracy, P-regulated motors using the tacho-counter could have been used.

If this approach is used for driving more than a few meters, it could easily pay off to investigate further into the optimal values for rotation speed, wheel diameter and track width. The other options the DifferentialPilot offers should also be considered such as travel speed and acceleration.

## References

[[bibliography title=""]]

bagnall
Brian Bagnall, Maximum Lego NXTBuilding Robots with Java Brains, Chapter 12, Localization, p.297 - p.298.
robotics
Java Robotics Tutorials, Enabling Your Robot to Keep Track of its Position. You could also look into Programming Your Robot to Navigate to see how an alternative to the leJOS classes could be implemented.
mataric
Maja J Mataric, Integration of Representation Into Goal-Driven Behavior-Based Robots, in IEEE Transactions on Robotics and Automation, 8(3), Jun 1992, 304-312.
lejos
leJOS Tutorial: Controlling Wheeled Vehicles
hellstrom
Thomas Hellstrom, Foreward Kinematics for the Khepera Robot

[[/bibliography]]

# Lab Notebook

Date: 02/02 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 3 hours

## Goal

Investigate how a behavior-based architecture can be implemented and look at lejos.subsumption.Behavior and lejos.subsumption.Arbitrator.

## Plan

• Build a robot and add a sonic- and a touch-sensor.
• Make the BumperCar program run on the NXT.
• Investigate the behavior of the BumperCar program.
• Implement an Exit behavior in the BumperCar program.
• Investigate the source code of the Arbitrator.
• Implement a solution that reads the ultrasonic sensor continuously.
• Implement HitWall so it moves backwards for 1 second before turning.
• Implement HitWall so it can be interrupted.
• Change the program so it uses motivational values.

## Results

### The robot

The robot was built according to the manual 9797, pages 8-22. The ultrasonic sensor was added according to pages 28-29, and the touch sensor according to page 36.

### Behavior of BumperCar.java

When the touch sensor is pushed once, the robot just continues going forward. If the touch sensor is pressed for a longer time, the robot backs off and turns a little to the left. For the ultrasonic sensor the behavior is the same when it detects an obstacle.
When the isPressed action is true the robot starts going backwards (HitWall), but as soon as it is false it continues normal operation (DriveForward).

### Implementing the Exit behavior

When pressing the escape button shortly while the robot is executing DriveForward, the robot stops momentarily and then continues to drive forward. When the escape button is held for a while during DriveForward, the program exits. Pressing the escape button, for any duration, while the robot is executing HitWall results in the program exiting.
This could be because the Exit behavior doesn't execute System.exit(0) before control is handed back to DriveForward. It exits right away when executing HitWall because the ultrasonic sensor takes ~20 msec to measure the distance; HitWall therefore only takes back control after 20 msec, which leaves Exit time to execute System.exit(0).

We solved this with a variable called pressed, which is set to true when Exit gets control; control is not given back as long as pressed is true, thereby simulating holding the escape button.

...
@Override
public boolean takeControl() {
if (button.isPressed()) {
pressed = true;
}
return pressed;
}
...


### Investigating the Arbitrator

We tested whether the Arbitrator calls takeControl in the different behaviors by adding a counter and printing how many times takeControl is called. It turns out that the leJOS Arbitrator doesn't call takeControl on every behavior: once the Arbitrator gets a true value from takeControl on a behavior, starting from the highest priority, there is no reason for it to continue and it executes that behavior.
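The arbitration loop described above can be sketched in plain Java (illustrative names; the real lejos.subsumption.Arbitrator additionally runs the chosen behavior's action and suppress methods):

```java
class ArbiterSketch {
    interface SimpleBehavior {
        boolean takeControl();
    }

    // Scan from the highest priority (last index) and return the index of the
    // first behavior that wants control; lower-priority ones are never asked.
    static int highestWanting(SimpleBehavior[] behaviors) {
        for (int i = behaviors.length - 1; i >= 0; i--) {
            if (behaviors[i].takeControl()) return i;
        }
        return -1; // no behavior wants control
    }
}
```

This explains the counter observation: behaviors below the winning priority never see their takeControl called in that arbitration round.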

### Continuous reading of the ultrasonic sensor

This is achieved by running a local thread inside the HitWall behavior, which updates the distance value every 20 msec.

...
sonar = new UltrasonicSensor(port);
}

public void run() {
while (true) {
HitWall.distance = sonar.getDistance();
LCD.drawString("distance: " + distance + "  ", 0, 4);
}
}
...


### Implementing changes in HitWall

Making the robot go backwards for 1 second before turning is achieved with Thread.sleep(1000). We didn't manage to make HitWall interrupt itself, so we moved on to "implementing" motivational values.

### Change the program so it uses motivational values

The classes provided in the lesson (Arbitrator, Behavior and BumperCar) were used to achieve this, due to a shortage of time.
The provided code works as expected, and HitWall/DetectWall is able to interrupt itself by using the motivational values.

## Conclusion

Behavior based design is an easy way to program a robot with different behaviors, and it proved easy to add a new behavior with a higher priority.

As seen in the lab session there can be issues with a behavior losing control before it is done executing all its operations, and the behavior-based design of leJOS seems to have some shortcomings, so there is something to gain by using motivational values instead.

## References

[[bibliography title=""]]

lejos
The leJOS Tutorial, Behavior Programming
krink
Thiemo Krink(in prep.). Motivation Networks - A Biological Model for Autonomous Agent Control.

[[/bibliography]]


# 1) Choosing the Project

Date: 02/02 2012
Group members participating: Falk, Jeppe and Jakob
Duration of activity: 3 hours

## 1.1) Goal

The goal of today is to choose and describe the end course project.

## 1.2) Plan

• Find and describe three projects.
• Select one of the three projects as the end course project.
• Make a plan for the end course project.

## 1.3) Results

### 1.3.1) First Project Proposal - Animal Behavior/Robot Game

Description:
A predator-prey setting with one predator robot and 5-8 prey robots. The predator is remotely controlled, and the preys drive around autonomously.
The predator is big and slow and the preys are small and fast.
The preys search for food while avoiding each other, other objects and the predator. When the predator approaches they scream and scatter.
If one prey hears another prey screaming, it panics. Different sounds indicate different behaviors.
The predator is able to kill a prey robot by hitting it on the top.

Hardware/Software Requirements:
Each robot has to be built around an NXT, so around 7 NXTs are needed.
The preys have two touch sensors, a microphone and a color sensor, and two motors for driving.
The predator needs no sensors as it is remote controlled, but two motors for driving and one for hitting the preys.
The predator is remotely controlled through bluetooth by a pc or a smart phone.

Software Architecture:
The preys use a behavior-based architecture for controlling their actions, and all processing would be done on the NXTs.

Challenges:
Making the preys move autonomously and implementing a proper avoidance behavior could be a challenge.
It could be tricky to make the preys navigate using only the touch sensors. A solution could be to build some sort of inner map of the environment using the tacho count.
Detecting sounds of different frequencies can be a problem.

Presentation at the End:
A presentation would show someone playing the game, i.e. controlling the predator and hunting the preys until all preys are dead.

### 1.3.2) Second Project Proposal - The Lost Robots

Collaborating, map building, communication.

Description:
Three robots with different skills collaborate in solving a task. The task can be finding different colored objects in an environment and returning them to some spot in a certain sequence.
One robot has the task of mapping the environment and locating the objects, another robot grabs the objects and gives them to the third robot, which transports them to the goal zone.

Hardware/Software Requirements:
Three NXTs are needed, one for each robot, each equipped with motors for driving and an ultrasonic sensor for avoiding objects. A color sensor is needed to detect objects, a touch sensor and a motor for grabbing, and finally a touch sensor on the transport robot.

Software Architecture:
All processing would be done in a behavior-based architecture on and between the NXTs. The NXTs communicate using bluetooth.

Challenges:
The issues are communication between the robots and mapping an environment correctly.
The accuracy of the map could be improved if a controlled environment with some sort of grid is used.

Presentation at the End:
Showing the three robots solving a task by collaborating.

### 1.3.3) Third Project Proposal - The Driverless Car

Description:
A driverless car drives around a city (of lego roads) and maps the environment and takes pictures of the whole city, and matches the pictures to related spots (mini google streetview car).

Hardware/Software Requirements:
A light sensor and an ultrasonic sensor for navigation. Using the tacho-counter to help mapping and a smart phone for taking pictures.

Software Architecture:
The robot would be controlled with a behavior-based architecture and communicates with a pc via bluetooth.

Challenges:
The biggest challenge here is mapping an environment correctly using the tacho-counter.

Presentation at the End:
The robot drives around the track, making the map and sending pictures and the map to the pc.

## 1.4) Conclusion

We ended up choosing the first proposal, because it sounds fun and it has a lot of challenges. The project is easily extendable if we have a lot of time, e.g. we could make the predator move autonomously or make the preys move slower if they haven't eaten for a long time, or similar more advanced behavior.

### 1.4.1) The Overall Plan

• Experiment with the sensors.
• Designing the architecture of the prey robots.
• Start building and testing one prey robot.
• Design and build the environment.
• Make a prey robot move around in the environment.
• Make all the prey robots move around and interact with each other in the environment.
• Build the predator and implement remote control.
• Make the preys interact with the predator.
