# 2 Runaround

In Asimov's short story 'Runaround', collected in I, Robot, there is a conflict between the instruction given to the robot Speedy to collect selenium (Second Law: obey a human) and Speedy's strong self-preservation potential (Third Law: protect its own existence), which is endangered by the gases around the selenium pool. In this notebook you are going to see how such conflicts, which are often represented as conflicting potentials, can affect a robot's behaviour.

Because the conflicting potentials vary from point to point in space, we can think of them as potential maps. One way of modelling a potential map is to think of it in geographical terms, as a 'landscape' over an area: areas of high potential are represented by high points in the landscape, and areas of low potential by valleys. We can take the modelling one step further and represent this landscape using a topographical map, where high points are shown as bright colours and low areas as dark colours.
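To make this concrete, here is a minimal sketch (not part of the simulator, and assuming NumPy and Matplotlib are available in your notebook environment) that builds a simple radial potential map and displays it topographically, with bright colours for high potential and dark colours for low. The grid size and the Gaussian fall-off are arbitrary choices made for the illustration.

import numpy as np
import matplotlib.pyplot as plt

# Build a grid of (x, y) coordinates covering a square area
x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))

# A simple radial potential: high in the centre, falling away with distance
potential = np.exp(-(x**2 + y**2) / 0.2)

# Display the potential map topographically:
# bright colours for high potential, dark colours for low areas
plt.imshow(potential, cmap='viridis', origin='lower', extent=(-1, 1, -1, 1))
plt.colorbar(label='potential')
plt.title('A simple radial potential map')
plt.show()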

So load the simulator in the normal way and let’s get ready to have a run around…

from nbev3devsim.load_nbev3devwidget import roboSim, eds

%load_ext nbev3devsim

## 2.1 Exploring the selenium field

Download and run the following program using the Radial_red background and the Small_Robot_Wide_Eyes robot configuration to explore the environment in pen-down mode. What does the robot do?

Use the Simulator controls, or the simulator's S keyboard shortcut, to stop the downloaded program running.

%%sim_magic_preloaded -b Radial_red -pR -r Small_Robot_Wide_Eyes

colorLeft = ColorSensor(INPUT_2)
colorRight = ColorSensor(INPUT_3)

# The GAIN term 'amplifies' the error signal
GAIN = 30

steering_drive = MoveSteering(OUTPUT_B, OUTPUT_C)
steering_drive.on(0, 20)

# Keep going while both sensors detect at least some light
while (colorLeft.reflected_light_intensity_pc > 5
       and colorRight.reflected_light_intensity_pc > 5):
    
    intensity_left = colorLeft.reflected_light_intensity_pc
    intensity_right = colorRight.reflected_light_intensity_pc
    
    # Maybe useful for debugging
    #print(intensity_left, intensity_right)
    
    # Calculate an error term that turns us towards selenium
    error = intensity_right - intensity_left
    
    # Amplify the error to generate a steering correction value
    # The min/max keeps the steering in bounds -100...100
    correction = min(max(error * GAIN, -100), 100)
    
    # Steer the robot accordingly
    steering_drive.on(correction, 20)
    

Your notes and observations on what happens when the program is executed.

The previous program is based on one of the earlier Braitenberg programs in which the robot attempts to turn towards the bright light source. This corresponds to the ‘get selenium’ instruction.

By inspection of the program, you might be wondering: what happens if we set error = intensity_left - intensity_right? Can you make a prediction about that? Is your prediction likely to be affected by the starting point and orientation of the robot? Try it and see!

But what happens if we add a further rule that causes the robot to shy away from the selenium if it gets too close?

Download and run the following program and observe what happens:

%%sim_magic_preloaded -b Radial_red -P green -pRH -r Small_Robot_Wide_Eyes

colorLeft = ColorSensor(INPUT_2)
colorRight = ColorSensor(INPUT_3)

# The GAIN term "amplifies" the error signal
GAIN = 30

steering_drive = MoveSteering(OUTPUT_B, OUTPUT_C)

# Keep going while both sensors detect at least some light
while (colorLeft.reflected_light_intensity_pc > 5
       and colorRight.reflected_light_intensity_pc > 5):
    
    intensity_left = colorLeft.reflected_light_intensity_pc
    intensity_right = colorRight.reflected_light_intensity_pc
    
    # Maybe useful for debugging
    #print(intensity_left, intensity_right)
    
    # Calculate an error term that turns us towards selenium
    error = intensity_right - intensity_left
    
    # If we are too close to the selenium, override that rule
    # and turn the other way
    if intensity_right > 75 or intensity_left > 75:
        error = -2 * error
    
    # Amplify the error to generate a steering correction value
    # The min/max keeps the steering in bounds -100...100
    correction = min(max(error * GAIN, -100), 100)
    
    # Steer the robot accordingly
    steering_drive.on(correction, 20)

Record your observations here.

When the second program is run, the robot approaches the selenium deposit, turns away, then approaches it again. When I ran the program, it started to ‘thrash’: the robot didn’t seem to know whether to turn left or right. (You may even have started to feel sorry for it…)

What happens if you add some sensor noise using the slider as the program is running? Does the addition of some uncertainty in the sensor values help the robot make progress at all?
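Before you try it, the following standalone sketch (plain Python, not a simulator program; the readings and noise levels are made-up values) may give you a feel for why noise can change things: when the two sensor readings are almost equal, even a small amount of random noise can flip the sign of the error term from one reading to the next, so the direction of the steering correction is no longer fixed by a tiny, possibly stale, difference.

import random

# Two almost-equal readings, as when the robot faces the selenium head on
intensity_left, intensity_right = 80.0, 80.5
GAIN = 30

for noise in (0, 2, 5):  # noise amplitude in percentage points
    corrections = []
    for _ in range(1000):
        noisy_left = intensity_left + random.uniform(-noise, noise)
        noisy_right = intensity_right + random.uniform(-noise, noise)
        error = noisy_right - noisy_left
        corrections.append(min(max(error * GAIN, -100), 100))
    one_way = sum(c < 0 for c in corrections)
    print(f"noise={noise}: {one_way} of 1000 corrections steer one way,"
          f" {1000 - one_way} the other")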

## 2.2 Summary

Simple programs can often lead to complex emergent behaviours. Whilst the control behaviours themselves may be simple, the way they interact with the environment, which may itself be complex, can lead to a wide range of behaviours that you might never think to predict.

In the example program in this notebook, you saw how two simple rules – one for turning towards the selenium, and another, ‘higher-priority’ rule for turning the other way if the robot gets too close – can interact to create complex behaviour from two simple sensor inputs.
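To see where that complexity can come from, here is a rough standalone sketch of the same decision logic outside the simulator (the intensity values are made up for the illustration). It prints the steering correction as the overall sensed brightness rises towards and past the ‘too close’ threshold: notice the abrupt sign change at the threshold, where the program switches from steering hard one way to steering hard the other, which is the seed of the thrashing behaviour you may have observed.

GAIN = 30

def correction(intensity_left, intensity_right, threshold=75):
    # Attract rule: turn towards the brighter side
    error = intensity_right - intensity_left
    # Repel rule: if we are too close, turn the other way
    if intensity_left > threshold or intensity_right > threshold:
        error = -2 * error
    return min(max(error * GAIN, -100), 100)

# A small, fixed left/right imbalance, with the overall brightness
# increasing as the robot gets closer to the selenium deposit
for brightness in range(60, 91, 5):
    print(brightness, correction(brightness - 1, brightness + 1))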