3 Emergent robot behaviour and simple data charts

One of the things that you may be starting to realise about robot programming is that you don’t always know exactly what will happen when you introduce your robot into a particular environment.

As the programmer, you may be quite confident that you know what the robot will do when presented with any particular situation, such as a particular sensor reading.

Using a sense–act control strategy, the robot determines its actions in part based on its sensor readings. And those sensor readings are themselves likely to be influenced by the behaviour of the robot as it acts in response to its sensor readings. Whilst the control strategy itself might be quite simple, the interaction between the robot and the world may be quite complex as a result of the feedback loop created by the robot changing its position in the environment as a result of a sensor reading made from that environment.

When a robot is placed into an environment, predicting its overall behaviour may prove even more challenging: in the real world, the environment is likely to be changing dynamically of its own accord, in addition to the robot changing its own situation within it.

And for a bystander observing the behaviour of the robot with no knowledge of the control strategy in place, their perception of what the robot is doing, and maybe even the beliefs, desires and intentions they ascribe to it, may be far removed from what the robot is actually programmed to do at a low level.

A good example of this can be illustrated using Braitenberg vehicles, where it is easy for an observer to want to ascribe high-level, human-style (anthropomorphised) behaviours to the robot, even if it is hardwired to respond in a particular way to a very simple stimulus (which is to say, a sensory input).

In this notebook, you’ll have an opportunity to experiment with some simple Braitenberg-style vehicles in the RoboLab simulator. In order to help make sense of what the vehicles are doing, you’ll also see how we can use the simulated robot to collect some sensor datalogs that we can then visualise in order to see just what values the robot was responding to that caused it to behave as it did.

3.1 Braitenberg’s vehicles

In the course of studying the TM129 Robotics block, you will already have been introduced to Valentino Braitenberg’s ideas on the behaviour of robots emerging from the way they are wired up. The figure below shows two ways of connecting sensors to motors. In (a), the left sensor is connected to the left motor and the right sensor is connected to the right motor. In (b) these connections are reversed.

Diagrams representing Braitenberg vehicles alongside simulated robots wired up in a similar fashion. A Braitenberg vehicle and our simulated robot are very similar: they have two wheels, one each side, and two light sensors, one on the left and one on the right of the front of the robot. A pair of Braitenberg vehicles are shown, one light avoiding and one light seeking. A light-avoiding vehicle has the left-hand light sensor connected to the left-hand motor and wheel, and the right-hand light sensor connected to the right-hand motor and wheel. A light-seeking vehicle has the left-hand light sensor connected to the right-hand motor and wheel, and the right-hand light sensor connected to the left-hand motor and wheel. The simulated robots have wiring indicating identical connections.

A ‘thought experiment’ suggests that the vehicle in figure (a) will move away from a light source.

Similarly, another thought experiment suggests that the vehicle in figure (b) will move towards a light source.
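
We can rehearse these two thought experiments with a quick calculation. The following sketch is plain Python rather than simulator code, and the intensity values are made up purely for illustration: it assumes the light source is over to the robot's left, so the left sensor returns a higher reading than the right one, and that a wheel turns faster the larger the speed value it is given.

# A thought-experiment sketch in plain Python (not simulator code).
# Assume the light source is to the robot's left, so the left sensor
# reads a higher (made-up) intensity than the right sensor.
intensity_left, intensity_right = 80, 60

def turn_direction(left_speed, right_speed):
    """Describe the turn implied by a pair of wheel speeds."""
    if left_speed > right_speed:
        return 'turns right, away from its faster left wheel'
    if right_speed > left_speed:
        return 'turns left, away from its faster right wheel'
    return 'drives straight ahead'

# (a) Same-side wiring: left sensor -> left motor, right sensor -> right motor
print('Same-side wiring: robot', turn_direction(intensity_left, intensity_right))
# The left wheel spins faster, so the robot turns right, away from the light.

# (b) Crossed wiring: left sensor -> right motor, right sensor -> left motor
print('Crossed wiring: robot', turn_direction(intensity_right, intensity_left))
# The right wheel spins faster, so the robot turns left, towards the light.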

In the following activities you will test these predictions using an environment that models this set up, but uses downward-facing light sensors that take measurements from a ‘light gradient’ background, rather than forward-facing light sensors that look for a light source at ‘eye-level’ (that is, sensor-level!).

To start with, let’s load in the simulator:

from nbev3devsim.load_nbev3devwidget import roboSim, eds

%load_ext nbev3devsim
%load_ext nbtutor

3.2 Reconfiguring the robot

In order to detect different values from the light sensors on the right- and left-hand sides of the robot, we need to reconfigure the robot so that the sensors are placed further apart than they are in the default robot configuration.

In the simulator, or via the following line magic, select the Radial_grey background and tick the Pen Down checkbox.

%sim_magic -p -b Radial_grey

You may notice that the simulator’s left and right light sensors appear to be further apart than they have been previously. This has been done via a change to the robot configuration setting that is applied automatically when the Radial_grey background is loaded.

3.2.1 Manually changing the robot configuration settings

You can manually increase the spacing between the sensors by:

  • click the Settings button in the simulator, and then select the Robot config button

  • in the robot configuration settings window, scroll down to the "sensor1" parameters and change the "x" value from its default value of -20 to the new value -60

  • for "sensor2", change the "x" value from its default value of 20 to the new value 60

  • click the Apply button.

If you look at the robot in the simulator, then you should notice that the two light sensors are now located nearer the sides of the robot and are no longer located close to the centreline.

3.2.2 Exploring the Radial_grey world

Run the following code cell to download the program to the simulator and then run it in the simulator. For now, don’t pay too much attention to the code: our initial focus is purely on what we can observe about the behaviour of the robot.

Observe what happens, paying particularly close attention to the trajectory the robot follows.

Using the Positioning panel, create a new starting location in the simulator, changing the original Y value from 400 to the new value 600. How does the robot move this time?

%%sim_magic_preloaded -p -b Radial_grey -y 400

counter = 0

colorLeft = ColorSensor(INPUT_2)
colorRight = ColorSensor(INPUT_3)

while ((colorLeft.reflected_light_intensity_pc > 5) 
       and (colorRight.reflected_light_intensity_pc > 5)):
    
    intensity_left = colorLeft.reflected_light_intensity_pc
    intensity_right = colorRight.reflected_light_intensity_pc
    
    if not (counter % 10):
        print(intensity_left, intensity_right)
    counter = counter + 1
    
    left_motor_speed = SpeedPercent(intensity_left)
    right_motor_speed = SpeedPercent(intensity_right)
    
    tank_drive.on(left_motor_speed, right_motor_speed)

With the robot starting just below the centreline on the radial grey background, you should notice that as it moves across the background it veers away from the light on a path that curves towards the bottom right of the simulator screen, steering to the right from the robot’s perspective.

When the robot starts above the centreline, it veers away on the left-hand side of the central bright point (that is, the robot steers to its left).

If the robot starts on the centreline then it continues on a straight path.

So how does the program work?

If you inspect it closely, you will see it is split into several parts.

The first part just sets up the sensor configuration:

colorLeft = ColorSensor(INPUT_2)
colorRight = ColorSensor(INPUT_3)

Then we have a while... loop that keeps the program running until either the left or the right sensor sees a particularly dark value:

while ((colorLeft.reflected_light_intensity_pc > 5) 
       and (colorRight.reflected_light_intensity_pc > 5)):

Inside the while... block is the ‘intelligence’ of the program. First, the current left and right sensor readings are assigned to the intensity_left and intensity_right variables.

The values are then displayed in the simulator output window using a print() statement. The lines:

    if not (counter % 10):
        print(intensity_left, intensity_right)
    counter = counter + 1

may confuse you at first, but they are there to limit the display of the sensor readings to every tenth iteration of the loop. The counter % 10 expression finds the remainder when the value of counter is divided by 10; not (counter % 10) then evaluates as True on every tenth iteration because the Boolean value of 0 is False, whereas that of every other positive integer is True.
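
If you want to convince yourself of how the counter % 10 test behaves, the following quick check is plain Python that you can run independently of the simulator:

# Quick check of the counter % 10 trick (plain Python, not simulator code)
for counter in range(25):
    if not (counter % 10):
        # counter % 10 is 0 (and so 'falsy') only when counter is a
        # multiple of 10, so this branch runs for counter = 0, 10, 20
        print('Iteration', counter, 'would print the sensor readings')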

The sensor values are then used to set the motor speeds:

    left_motor_speed = SpeedPercent(intensity_left)
    right_motor_speed = SpeedPercent(intensity_right)
    
    tank_drive.on(left_motor_speed, right_motor_speed)

In this configuration:

  • the percentage-scaled left sensor value determines the speed value applied to the left motor

  • the percentage-scaled right sensor value determines the right motor speed.

The sensor value reports a higher reading the brighter the background. As the robot approaches the light source from below the centreline, the left sensor reads a higher value than the right sensor. As described by the program, the left motor thus turns more quickly than the right motor, and so the robot turns toward its right-hand side and veers away from the light source.

3.2.3 Crossing the wires

Now let’s see what happens if we run the following program which uses:

  • the left light sensor to control the speed of the right motor

  • the right light sensor to control the speed of the left motor.

Still using the Radial_grey background, clear the traces in the simulator.

Run the following code cell to download the program to the simulator and then run it in the simulator.

Move the robot to the starting location X=100, Y=700 and run the program again.

How does the robot’s behaviour with the ‘cross-wired’ sensors and motors compare with the ‘direct’, same-side wiring?

%%sim_magic_preloaded -b Radial_grey

colorLeft = ColorSensor(INPUT_2)
colorRight = ColorSensor(INPUT_3)

while ((colorLeft.reflected_light_intensity_pc > 5) 
       and (colorRight.reflected_light_intensity_pc > 5)):
    
    intensity_left = colorLeft.reflected_light_intensity_pc
    intensity_right = colorRight.reflected_light_intensity_pc
    
    print(intensity_left, intensity_right)
    
    left_motor_speed = SpeedPercent(intensity_right)
    right_motor_speed = SpeedPercent(intensity_left)
    
    tank_drive.on(left_motor_speed, right_motor_speed)
 

When the program runs this time, the robot arcs towards the light: if it starts below the centreline, then the robot turns to its left and up towards the light; if it starts above the centreline, then the robot turns to its right, and curves down towards the light.

3.2.4 Explaining the robot’s behaviour

How is the robot’s behaviour explained by the program this time?

Double-click this cell to edit it and enter your explanation of why the robot behaves as it does.

Example solution

Click the arrow in the sidebar or run this cell to reveal an example solution.

The sensor values are mapped onto motor speeds with the following lines of code:

    left_motor_speed = SpeedPercent(intensity_right)
    right_motor_speed = SpeedPercent(intensity_left)
    
    tank_drive.on(left_motor_speed, right_motor_speed)

In this configuration, the percentage-scaled right sensor value determines the speed value applied to the left motor, and the percentage-scaled left sensor value sets the right motor speed.

As before, the sensor value reports a higher reading the brighter the background. As the robot approaches the light source from below the centreline, the left sensor reads a higher value than the right sensor. This results in the right-hand motor turning more quickly than the left motor. As a result, the robot turns toward its left-hand side and turns towards the light source.

3.2.5 Looking at the data

To understand a little more closely what the sensors are seeing, click the Chart toggle display button in the simulator to open the chart and then select the Left light and Right light traces.

To start with, let’s just make sure the datalog is empty:

# Clear the datalog
%sim_data --clear

The following program streams the necessary data elements to the simulator output window.

Run the program and observe the behaviour of the traces.

How do the traces differ in value?

%%sim_magic_preloaded -c -b Radial_grey

colorLeft = ColorSensor(INPUT_2)
colorRight = ColorSensor(INPUT_3)

count = 1

while ((colorLeft.reflected_light_intensity_pc > 5) 
       and (colorRight.reflected_light_intensity_pc > 5)):
    
    intensity_left = colorLeft.reflected_light_intensity_pc
    intensity_right = colorRight.reflected_light_intensity_pc
    
    # Sample every fifth pass of the loop
    if not (count % 5):
        print('Light_left: ' + str(intensity_left))
        print('Light_right: ' + str(intensity_right))
    count = count + 1
    
    left_motor_speed = SpeedPercent(intensity_right)
    right_motor_speed = SpeedPercent(intensity_left)
   
    tank_drive.on(left_motor_speed, right_motor_speed)

By inspection of the traces, you should notice that one of them is always slightly higher than the other.

As the robot gets closer to the light source, and the light ‘gradient’ increases, the difference between the left and right sensor readings increases and the gap between the lines increases. As the robot moves further away from the light source, the readings get closer together again.

We can also inspect the data directly in the notebook by looking at the datalog that is synchronised back to the notebook.

Run the following code cell.

# Grab the logged data into a pandas dataframe
df = %sim_data

# Preview the first few rows of the dataset
df.head()
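
We can also get a quick numerical summary of each logged trace. The following sketch assumes the dataframe uses the variable and value column names that also appear in the charting code below:

# Summary statistics for each logged trace
# (assumes the datalog dataframe has 'variable' and 'value' columns)
df.groupby('variable')['value'].describe()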

Run the following code cell to plot the data using the seaborn scientific charting package:

import seaborn as sns

# A line plot is a sensible chart type to use
# to plot the time series data
ax = sns.lineplot(x="index",
                  y="value",
                  hue='variable',
                  data=df)
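
It can also be revealing to chart the difference between the two light sensor traces directly, which makes the widening and narrowing of the gap easier to see. The following sketch assumes the logged traces are labelled Light_left and Light_right, matching the labels printed by the program above; check df.head() if your labels differ:

# Pivot the long-format datalog into one column per trace
# (assumes the traces are labelled 'Light_left' and 'Light_right')
wide_df = df.pivot_table(index='index', columns='variable', values='value')

# Plot the left-minus-right difference over time
(wide_df['Light_left'] - wide_df['Light_right']).plot()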

3.3 Using ultrasound

As well as a Braitenberg vehicle that uses two light sensors to activate the motor controls for the robot’s two wheels, we can also create a Braitenberg vehicle that uses a single distance sensor to moderate its behaviour, for example to try to avoid obstacles.

%sim_magic -b Obstacles_Test -u -x 120 -y 120 -a 90

3.3.1 Activity – Using ultrasound

Run the following code cell to download the program to the simulator. Ensure that the Obstacles_Test background is loaded and that the ultrasound sensor rays are enabled, and then run the program in the simulator.

You can enable the ultrasound sensor rays by passing the -u switch in the simulator magic or via the Obstacles pop-up in the simulator itself.

%sim_magic -u

Record your observations of the behaviour of the robot when the program is run in the simulator with the robot starting in different positions (for example, for combinations of (X, Y, Angle) of (120, 120, 90), (210, 120, 90), (500, 170, 145) and (500, 370, 75)). Based on your observations, what sort of behaviour does the robot appear to be exhibiting?

Record your observations here about what the robot appears to be doing when the program is run in the simulator with the robot starting in different positions.

%%sim_magic_preloaded -b Obstacles_Test -u -x 120 -y 120 -a 90

import time
ultrasonic = UltrasonicSensor(INPUT_1)

u = ultrasonic.distance_centimeters
print('Ultrasonic: ' + str(u))

time.sleep(1)

while u > 1:
    u = ultrasonic.distance_centimeters
    
    print('Ultrasonic: ' + str(u))
    
    speed = min(100, u)
    
    left_motor_speed = SpeedPercent(speed)
    right_motor_speed = SpeedPercent(speed)
    
    tank_drive.on(left_motor_speed, right_motor_speed)

Based solely on your observations, what sort of behaviour does the robot appear to be performing?

With reference to the program, what actions is the robot actually performing? Annotate the program with comments that describe the behaviour each step is intended to produce.

Example solution

Click the arrow in the sidebar or run this cell to reveal an example solution.

When the program is run in the simulator, the robot moves forwards but then slows down as it approaches the obstacle, as if it were a bit wary of it. The robot eventually stops as it reaches the obstacle if the obstacle is directly in front of the centreline of the robot. Otherwise, the robot inches up to the obstacle, moves slowly over it with its wheels, and then accelerates away once it is clear of the obstacle.

I have commented the program to explain how I think it works.

%%sim_magic_preloaded -b Obstacles_Test -u -x 120 -y 120 -a 90

import time
ultrasonic = UltrasonicSensor(INPUT_1)

# Wait for a moment:
# ultrasound sensors can take a moment or two to
# start working as they take soundings on the environment
time.sleep(1)

# Grab the sensor reading
# as a distance in cm
u = ultrasonic.distance_centimeters

print('Ultrasonic: ' + str(u))

# Check the distance to an obstacle
# Loop whilst the distance is greater than 1cm
# Note that from the ray trace, the sensor 
# appears to be mounted a little way in
# from the front edge of the robot.
while u > 1:
    # Resample the ultrasonic sensor reading
    u = ultrasonic.distance_centimeters
    print('Ultrasonic: ' + str(u))
    
    # Set a speed limit to the lesser of
    # 100 and the obstacle distance in cm
    speed = min(100, u)
    
    # Set the motor speeds based on the distance
    # to the nearest obstacle
    left_motor_speed = SpeedPercent(speed)
    right_motor_speed = SpeedPercent(speed)
    tank_drive.on(left_motor_speed, right_motor_speed)
    
# The distance must be less than 1cm
# so end the program and implicitly turn the motors off
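
To see how the speed rule in the program tapers off as the robot approaches an obstacle, here is a quick plain-Python check of the min(100, u) calculation, using some made-up distance values:

# How the min(100, u) rule maps obstacle distance (cm) to motor speed (%)
# (plain Python check with made-up distance values)
for distance in [250, 150, 100, 50, 20, 5, 1]:
    print('distance:', distance, 'cm -> speed:', min(100, distance), '%')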

3.4 Summary

In this notebook, you have experimented with some simple Braitenberg vehicles, seeing how a reactive control strategy based on some simple sensor inputs can lead to different emergent behaviours in the robot.

In one example you used simple ‘direct connection’ between light sensors and motor speeds; in another, you explored how an ultrasound sensor could act as a ‘virtual bumper’ to identify when the robot was getting close to an obstacle.

In some cases, we might be tempted to describe the behaviours that emerge from the interaction of the control strategy with the environment as ‘intelligent’, or even to ascribe certain desires to the robot (such as ‘it wants to do this’). But that is not really what is happening: the robot is simply reacting to particular inputs in a particular way.