MAE 4190 Fast Robots

Albert Sun · (607) 229-7845 · ays48@cornell.edu

I'm Albert, an undergraduate studying ECE at Cornell. This is my website for MAE 4190, Fast Robots. My experience with robotics consists of working with agricultural robots in a research lab and my project team, where we are working on a robotic sailboat. I am excited to learn more about the field of robotics and how to apply it to real-world problems.


Lab 1A

In this lab, we set up our Artemis Nano board and establish BLE communication with it. This sets us up for future labs, where we will communicate with the robot from our computer.

Prelab

We first download the Arduino IDE along with the SparkFun Apollo3 board manager. This allows us to program our Artemis board!

Arduino Logo

Blink

Blink an LED onboard the Artemis board!

Serial

We then check the serial communication between our board and the serial monitor. We can see messages being echoed back when we communicate through the serial monitor. This allows us to print debug statements as well as read critical information from the board.

Read Temperature Sensor

We can read the temperature sensor on the Artemis by performing an analog read. Running the example, we get the following output:

The temperature sensor is connected to an onboard ADC, which converts the analog reading into a digital value that is then printed to the serial monitor for us to read.

Microphone Output

By playing keys on a piano, we can change the maximum frequency read by the onboard microphone, and we can see the output change as we play different notes. This exercises the pulse density microphone (PDM) on the Artemis board: the example reads the microphone's digital output and runs an FFT to report the loudest frequency.


Lab 1B

In part B of this lab, we configure the BLE communication between our Artemis board and our computer. This allows us to send commands and receive data through Bluetooth.

Prelab

Before running commands, we need to set up our environment to connect to the Artemis. We start by creating a virtual environment for Python with the following commands:

python3 -m pip install --user virtualenv
python3 -m venv FastRobots_ble

Launching our virtual environment gives us the following:

venv

We then connect our Artemis to our computer through BLE. The screenshot below shows the advertised MAC address of the Artemis. After getting the MAC address, we configure the UUIDs to match on the computer and the board, then connect. We set all of the connection information in the jupyter lab notebook.

ble
ble

Once all the proper configurations are made, we can finally connect to the Artemis board!

ble

Task 1

Our first task is to simply send a string from our computer to the Artemis using the ECHO command. On the Arduino side, we see the code below. We simply extract the string and append extra characters to the end before sending it back to the computer.

ECHO
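For reference, a minimal sketch of the handler, assuming the helper names from the course's BLE template (robot_cmd, tx_estring_value, tx_characteristic_string):

    case ECHO: {
      char char_arr[MAX_MSG_SIZE];  // MAX_MSG_SIZE assumed from the template

      // Extract the string sent from the computer
      success = robot_cmd.get_next_value(char_arr);
      if (!success)
        return;

      // Append the extra characters and send the result back over BLE
      tx_estring_value.clear();
      tx_estring_value.append("Robot says -> ");
      tx_estring_value.append(char_arr);
      tx_characteristic_string.writeValue(tx_estring_value.c_str());
      break;
    }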

When sending an ECHO request in jupyter lab, we see the return string as follows:

ECHO

Task 2

We implement the SEND_THREE_FLOATS command to extract three floats in the Arduino sketch. The Arduino code below extracts each of the three floats, which are separated by the "|" character. They are then stored in variables and printed to the serial monitor.

send three floats

When running the command in jupyter lab, we see the following output on the Arduino side. The floats are extracted and printed to the serial monitor.

Task 3

To make debugging easier in the future, we need a way to receive timing information from the board. We implement the GET_TIME_MILLIS command to return the current time in milliseconds tracked by the Artemis board, sent through BLE to our computer.

get time millis

The above shows the code on the Arduino side. We create a temporary buffer to store the string of the current time in milliseconds. We then send this string through BLE by writing to the characteristic. The output on the computer side is shown below:

get time millis

Task 4

After implementing the GET_TIME_MILLIS command, we implement a notification handler. This allows our computer to automatically parse the data sent from the Artemis board. We implement a simple notification handler that prints timing data with the code shown below:

get time millis

Above, we see that when we send the GET_TIME_MILLIS command, the timestamp is printed as soon as the board sends back the data.

Task 5

To determine how fast timestamp messages can be sent, we implement a loop on the Arduino that continuously sends timestamps to our computer.

loop

We read this output in jupyter lab. By counting the number of timestamps received over 5 seconds of messages, we can calculate the number of messages sent per second: the timestamps arrive at a rate of 44.8 messages per second.

loop

Task 6

Here, we implement a command that generates an array of timestamps before sending the entire array. We do this by looping over a predefined array size and populating the array with timestamps. The datapoints are then looped over and sent to the computer.


loop
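A condensed sketch of the approach, with the array size and the BLE helper names assumed:

    // Fill a fixed-size array with timestamps, then send every entry over BLE
    const int ARRAY_SIZE = 1000;  // assumed size
    unsigned long time_arr[ARRAY_SIZE];

    for (int i = 0; i < ARRAY_SIZE; i++) {
      time_arr[i] = millis();
    }

    for (int i = 0; i < ARRAY_SIZE; i++) {
      tx_estring_value.clear();
      tx_estring_value.append("T:");
      tx_estring_value.append((int) time_arr[i]);
      tx_characteristic_string.writeValue(tx_estring_value.c_str());
    }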

On the python side, we implement a notification handler that stores each datapoint in an array. We find that the number of datapoints received matches the number on the Arduino side, meaning all data is sent correctly.

loop

Task 7

We also want to receive temperature readings from the board. Here we implement a command that sends an array of temperature readings along with an array of corresponding timestamps. The below shows the Arduino code which generates 2 arrays, one of timestamps and one of temperature readings in Fahrenheit. We then send these arrays to the computer through BLE by sending the temp and time values separated by a "|".


temp
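The send loop amounts to formatting each pair with the separator; a sketch with assumed array names:

    // Send each (time, temperature) pair as "time|temp"
    for (int i = 0; i < ARRAY_SIZE; i++) {
      tx_estring_value.clear();
      tx_estring_value.append((int) time_arr[i]);
      tx_estring_value.append("|");
      tx_estring_value.append(temp_arr[i]);
      tx_characteristic_string.writeValue(tx_estring_value.c_str());
    }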

On the python side, we use a similar notification handler that stores each datapoint in an array. We split each received string on the separator "|" and store the values in their respective arrays. By printing the arrays, we can see the timestamps and corresponding temperature values.

loop

Task 8: Discussion

We now compare the two methods of sending data: sending each datapoint live as it is recorded, versus storing the data and sending it all at once. With the first method we get real-time information, but the sampling rate is limited by how much data can be sent over BLE. With the second method we can record data as quickly as the sensor can sample and then send the entire array at once, but we cannot see the data in real time. In cases that require real-time data and debugging, the first method is preferred; if we want large sample sizes with high resolution, the second method is preferred.


Given the Artemis has 384 kB of RAM and an unsigned long takes 4 bytes, we can store 96,000 datapoints. When storing both temperature and time readings, that drops to 48,000 sample points.


This lab was a great introduction to the Artemis board and how to communicate with it. In addition, I became familiar with the BLE communication and the general debugging process with the Artemis. Understanding how to use the serial monitor will be a good skill when debugging in the future.


Lab 2

In this lab, we set up the IMU and connect it to our Artemis board via I2C. We then chart the data from the IMU and configure all of the sensors onboard. Once configured, we experiment with various filtering techniques to improve data quality.

Setup the IMU

Before writing code for this lab, we need to set up our IMU. We begin by installing the SparkFun 9DOF IMU Arduino library, which allows us to easily interface with the IMU and get readings. We then run the basic example program. In the program, we notice the AD0 pin setting: according to the datasheet, this controls the I2C address of the IMU, and we use a setting of 0 for our IMU. With the basic example code, we see IMU data printed to the serial monitor, confirming that our setup is correct.


Accelerometer

Using our table as a guide, we can read the accelerometer at (pitch, roll) = (-90, 90) and (0, 0) degrees. We use the Arduino serial plotter to produce the following graphs.

IMU at -90, 90 degrees

IMU at 0, 0 degrees


We can also calibrate out some error with a two-point calibration. This helps reduce the offset and scale errors in the accelerometer readings, providing more accurate data for our measurements and experiments. By taking the roll values at the two endpoints, we can compare our sensor to an ideal accelerometer.

Roll at -90 and 90 (Ideal: Purple, Real: Green)


While this offset may be due to tilting of the table or edge, we can still find the conversion factor between the two to be 1 / 0.952 with the formula:

S = (I2 - I1) / (R2 - R1)

We can now analyze our IMU data in the time and frequency domains. We first analyze the passive noise of the accelerometer by taking a sample and plotting the data in both domains. At a sampling frequency of 480 Hz, we see little noise except for a small spike around 30-40 Hz. Using a cutoff frequency of 35 Hz, we can compute the value of alpha for a low-pass filter with the formula alpha = cutoff_frequency / (cutoff_frequency + sampling_frequency / (2 * pi)). For a cutoff frequency of 35 Hz and a sampling frequency of 480 Hz, we pick an alpha of roughly 0.2.

Time domain and frequency domain graph of IMU pitch around 0 degrees

Upon analyzing our passive noise, we add some active noise to the system by vibrating our table. Again we see relatively consistent amplitudes with a small spike around 30-40 Hz.

Time domain and frequency domain graph of IMU pitch around 0 degrees with vibration


With the alpha value calculated, we can implement a low-pass filter to reduce the noise in our data. We use the code below, taking the previous filtered value of pitch multiplied by (1 - alpha) and adding the current pitch multiplied by alpha. When plotting the unfiltered and filtered responses on top of each other, we can see that the low-pass filter does a good job of reducing the noise in our data.


                        
    temps_lpf[i] = a * pitch + prev_lpf * (1 - a);
    prev_lpf = temps_lpf[i];

Low-pass filter applied to IMU data compared to unfiltered

Gyroscope

We can compare values from our accelerometer to our gyroscope. While the gyroscope readings are smoother, they are much more susceptible to drift, since errors accumulate as the gyroscope integrates angular velocity. The accelerometer, on the other hand, drifts less but is more susceptible to noise. We use the following equations to compute roll and pitch from the gyroscope, which we can compare to the accelerometer angles.

Roll = Roll + (GyroX * dt)

Pitch = Pitch + (GyroY * dt)

Gyroscope Roll vs Accelerometer Roll

Gyroscope Pitch vs Accelerometer Pitch

From the graphs, we can tell that the gyroscope is much more susceptible to drift. We can also see that the gyroscope is very smooth, comparable to the LPF; it does not suffer the same spikes as the raw accelerometer data. We can also change the sampling rate of our IMU to see the effects on the data: lowering the sampling rate causes a significant increase in gyroscope drift and a decrease in smoothness.

Gyroscope Pitch vs Accelerometer Pitch Low Sample Rate

Gyroscope Roll vs Accelerometer Roll Low Sample Rate


We can also implement a complementary filter to combine the two sensors, using a weighted average of the two to get a more accurate reading. We can implement this with the following code:

Gyroscope Roll vs Accelerometer Roll
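Conceptually, each timestep blends the gyro-integrated angle with the accelerometer angle; a minimal sketch, with the weight value assumed:

    // Complementary filter: integrate the gyro, then pull toward the accelerometer angle
    const float COMP_ALPHA = 0.7;  // assumed weight; higher favors the accelerometer

    roll  = (roll  + gyr_x * dt) * (1 - COMP_ALPHA) + roll_acc  * COMP_ALPHA;
    pitch = (pitch + gyr_y * dt) * (1 - COMP_ALPHA) + pitch_acc * COMP_ALPHA;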

This gives us the following graph. We notice that our drift issues are gone, as we are able to re-zero both pitch and roll to a fixed point in the graph. We weight our filter in favor of the accelerometer, which means we still see vibrations; however, there is less noise than in the raw accelerometer pitch and roll, and this can be improved by further tuning the weights.

Gyroscope Roll vs Accelerometer Roll


Increasing Sampling Rate

To increase our sampling rate, we remove the IMU data-ready checks; where before we would check if IMU data was ready, now we simply take our measurements. We also remove all prints to the serial monitor from the main loop, giving the main loop below. In this loop, we simply measure the IMU data and store it in temporary arrays, which are sent to the computer after measurements have finished.

Gyroscope Roll vs Accelerometer Roll

Over a sample size of 500 datapoints containing complementary roll and pitch along with yaw from the gyro, we achieve a sampling rate of 233 samples per second. Interestingly, this is not as high as the IMU's own sample rate: looking at the output data, we rarely see repeated values, meaning the IMU is sampling faster than we can read it. Indeed, the datasheet states that the gyro and accelerometer have maximum output data rates of 9 kHz and 4.5 kHz respectively, much faster than our sample rate.

The following code allows us to record 5 seconds of data and send all of the data from the Artemis to our computer. The 3 values we send are the complementary roll and pitch along with the gyro yaw.

We confirm that all the values are sent correctly by printing the timestamps on the python side. The time starts at 28922 ms and ends at 33920 ms, confirming that we record a full 5 seconds of data.

Array Storage

A consideration we need to make is how to store our data: one large array or multiple small arrays. We choose separate arrays for each value. This reduces code complexity, since instead of parsing each value out of a combined array when sending and reading, we can send values individually. It also increases flexibility, as we can easily choose to send only a specific set of values.


Considering the memory footprint of the arrays, we want to minimize the space our datatypes take. For timestamps, we use 4-byte integers to store the top end of millisecond values. Pitch, roll, and yaw require 4-byte floats for floating point precision. In total, each message takes 16 bytes to store. Given the Artemis has 384 kB of RAM, we can store a maximum of 24,000 messages; at the previously calculated sampling rate of 233 samples per second, that is a maximum of 103 seconds of data.

Stunt!

Driving the car around is really fun! My stunt includes doing some flips and drifting the car around.


Lab 3

The purpose of this lab is to set up the ToF sensors onboard the robot. They communicate over I2C, and we will be able to read distances from them. We attach two ToF sensors, configure them, and test them to ensure they are working properly.

Prelab

Before beginning this lab, we need to install the VL53L1X library through the Arduino library manager. Once installed, we must set up the I2C addresses of the sensors. Looking at the datasheet, the default I2C address is 0x52. Because we have two sensors, we need to change the address of one of them.



To change one of the I2C addresses, we can use the XSHUT pin on the ToF sensor to temporarily shut down one sensor. While one sensor is shut down, we change the I2C address of the other. Once both sensors are running again, they have unique addresses. This is significantly simpler than continuously switching them on and off at runtime, as the addresses simply remain unique.

Wiring diagram of our Artemis with both ToF attached. We utilize the multi-qwiic connector to connect both ToF sensors to the Artemis.

We also must determine where to place the ToF sensors on the robot. We mainly want to detect obstacles in the forward direction, so we place one sensor at the front. The second sensor is placed on the right side: the majority of our turns will be right turns, so a sensor on the right will help us detect obstacles while turning. We will miss obstacles directly to our left; however, this is not a major concern.

Hardware Setup

We begin by wiring our ToF sensors to our Artemis board. We also need to solder the XSHUT pin of one of the ToF sensors to the Artemis.


We then run the example I2C code to ensure that our Artemis can detect the sensor over I2C. Interestingly, the detected address is 0x29, which differs from the default address. This is because the LSB of an I2C address is reserved for the read/write bit: shifting 0x29 left by one bit gives 0x52, the default address of the ToF sensor.

Testing the Sensor

Our ToF sensor has 3 distance modes: short, medium, and long, set with the setDistanceMode function. Long mode is optimized for distances around 4 m, medium around 3 m, and short around 1.3 m. For our case, we use short distance mode, as we operate in smaller environments and need accurate readings at short distances to avoid obstacles. The video below shows us testing the ToF sensor and reading the output in the serial monitor.

We want to test the accuracy and precision of our ToF sensor. We do this by measuring a wall at various known distances and comparing the sensor output to the actual distance. To make sure our measurements are precise, we repeat this process 3 times to verify we get consistent readings.


Over 3 trials, we get very consistent readings across 0 - 1.2 m, the range of short distance operation, with an average of 100 samples per sample point. We notice a slight dropoff in precision above 1 m, where trials 2 and 3 deviate from the linear trend.


Another experiment we can perform is to see the impact of ranging time, which we change with the setTimingBudget function. To test this, we take 100 samples of an object at 10 cm with various ranging times and measure the standard deviation of the samples. As expected, higher ranging times lead to more accurate results, as the measured values approach the actual distance. In addition, the standard deviation decreases as the ranging time increases.

We can now hook up our second ToF sensor. As stated above, we must temporarily shut off one ToF to reprogram the other's address, which we achieve with the code below:
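A sketch of the one-time address change, assuming the SparkFun VL53L1X API and an assumed XSHUT pin number:

    // Hold sensor 2 in shutdown, re-address sensor 1, then wake sensor 2
    #define XSHUT_PIN 8  // assumed pin

    pinMode(XSHUT_PIN, OUTPUT);
    digitalWrite(XSHUT_PIN, LOW);         // sensor 2 drops off the I2C bus
    distanceSensor1.setI2CAddress(0x54);  // sensor 1 gets a new, unique address
    digitalWrite(XSHUT_PIN, HIGH);        // sensor 2 returns at the default 0x52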

Once the devices have unique addresses, we can read data from them independently as shown in the video. We can print the values from each ToF in the same iteration of our main loop without having to switch them on and off, as they have unique addresses.

Our next test measures the maximum speed we can run our loop at with the ToF sensors. We only poll the ToF when both sensors are ready; otherwise we simply skip the measurement. At each timestep, we print the time since the last iteration of the loop. Below we see the code to check the time between iterations, along with the trace of the serial monitor.
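A sketch of the timing check, with the sensor object names assumed:

    // Poll the ToF sensors only when both are ready; print the time per loop iteration
    unsigned long last_time = 0;

    void loop() {
      if (distanceSensor1.checkForDataReady() && distanceSensor2.checkForDataReady()) {
        int front = distanceSensor1.getDistance();
        int right = distanceSensor2.getDistance();
        distanceSensor1.clearInterrupt();
        distanceSensor2.clearInterrupt();
      }
      Serial.println(millis() - last_time);  // time since the previous iteration
      last_time = millis();
    }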


Above we see the printed timestamps. We notice that for every 10 iterations, there is one ToF measurement which takes 15ms. All other iterations take 5ms. This gives us an average loop time of 6ms. The current limiting factor is the time it takes to read the ToF sensor, as it is not ready every iteration. Therefore, we must make sure not to block on the ToF every iteration as it will slow down our code significantly.

We can now integrate the ToF sensors with our IMU and BLE communication to get a full set of data from our robot, which we send to our computer for processing. We add a new command, COLLECT_SENSORS, which returns roll, pitch, and yaw along with the distances from the front and right ToF sensors.
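A condensed sketch of the collection loop, with the helper and array names assumed:

    // Record IMU data every iteration; record ToF only when both sensors are ready
    while (millis() - start_time < time_run) {
      poll_imu();  // assumed helper: updates roll, pitch, yaw

      if (distanceSensor1.checkForDataReady() && distanceSensor2.checkForDataReady()) {
        front_arr[j] = distanceSensor1.getDistance();
        right_arr[j] = distanceSensor2.getDistance();
        distanceSensor1.clearInterrupt();
        distanceSensor2.clearInterrupt();
        tof_times[j] = millis();
        j++;
      }

      roll_arr[i] = roll;
      pitch_arr[i] = pitch;
      yaw_arr[i] = yaw;
      imu_times[i] = millis();
      i++;
    }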


The above snippet shows the data collection loop. We make sure to only poll the ToF when both sensors are ready in order not to block the IMU, which can be polled much faster. This gives us a ToF graph that only has data available at certain timestamps:

We collected roll and pitch data at the same time as the ToF. We notice that this data is continuous as it does not block on the ToF sensors.

Discussion

Overall, this lab was a great introduction to the ToF sensors. We learned how to configure the I2C address of the sensors and how to read data from them. We also integrated all of the sensors into our Artemis system. This will be very useful for future labs as now we have a suite of sensors to find our position and find obstacles.


Lab 4

The purpose of this lab is to be able to drive our robot with open loop control. We connect our Artemis board to the motor drivers and test the motors with a series of simple commands.

Prelab

In order to drive our motors with enough current, we parallel-couple the inputs and outputs of our motor drivers. This lets a single motor driver supply twice the current, which prevents overheating. We make sure to parallel-couple the inputs and outputs from the same motor driver to avoid timing issues.


The above diagram shows the wiring of our motor drivers. Both motor drivers are powered from a 3.7 V battery separate from the battery powering the Artemis. This prevents the current draw from the motors from affecting the Artemis, as voltage dips could shut off the Artemis during runtime. We make sure to choose analog-capable pins to drive PWM signals to the motor drivers. The pins chosen (3, 5, 14, 15) are close to the motor drivers when mounted. We keep the wiring as short as possible to avoid noise and interference; however, I made sure the wires to the motors were long for easy access.

Basic Wiring and PWM

We begin by wiring our motor drivers to our Artemis board and hooking everything up through an external power supply. We then probe the motor drivers to make sure they can regulate the input voltage with the PWM from the Artemis. To test this, I took inspiration from Nila Narayan (2024) and iterated through the range of PWM values. I repeated this for both motor drivers; the following video shows the oscilloscope trace of the PWM signal. We choose a power supply voltage of 3.7 V, the same as our 850 mAh battery.

With this, we can now drive the motors with the PWM signal. Initially, I tried to drive the motors from the power supply and noticed that the motors would stutter significantly despite the current limit being much higher than needed. I believe this is because the power supply could not keep up with the rapid current switching caused by the PWM. I then switched to using the battery, and the motors drove properly. To drive a motor in both directions, I used the following block of code, driving the motor in one direction by writing to one channel before switching to the other direction.

                        
    // Motor driver PWM pins (3, 5, 14, 15 per the wiring section; the
    // left/right channel assignment shown here is assumed)
    const int l1 = 3, l2 = 5, r1 = 14, r2 = 15;

    void setup() {
      pinMode(r1, OUTPUT);
      pinMode(l1, OUTPUT);
      pinMode(r2, OUTPUT);
      pinMode(l2, OUTPUT);

      drive(0, 50, 3000);
      back(0, 50, 3000);
    }

    void loop() {}

    void drive(int r_pwm, int l_pwm, int time) {
      analogWrite(l1, l_pwm);
      analogWrite(l2, 0);
      analogWrite(r1, r_pwm);
      analogWrite(r2, 0);
      delay(time);
      // Kill motors
      analogWrite(r1, 0);
      analogWrite(r2, 0);
      analogWrite(l1, 0);
      analogWrite(l2, 0);
    }

    void back(int r_pwm, int l_pwm, int time) {
      analogWrite(l1, 0);
      analogWrite(l2, l_pwm);
      analogWrite(r1, 0);
      analogWrite(r2, r_pwm);
      delay(time);
      // Kill motors
      analogWrite(r1, 0);
      analogWrite(r2, 0);
      analogWrite(l1, 0);
      analogWrite(l2, 0);
    }
                        
                        

We drive the motors in different directions by writing to the appropriate channels. We can see the result in the below video:

Now that we can drive the motors in both directions, we need to confirm that both motors function from battery power. We drive both of the motor controllers independently to make sure we have full functionality. The below videos display each side of the robot being driven.

Car Assembly

We can now assemble our car. I wanted the multi-qwiic connector as central as possible, since all of the sensors and the Artemis attach to it, so I placed it in the center of the car and assembled everything around it. I kept the 850 mAh battery in the battery compartment but left the 650 mAh battery accessible in case I need to take out the Artemis. The motor drivers are placed beside their respective motors, and I kept the ToF sensors' view as clear as possible so they do not detect wires. The Artemis is placed at the back for easy USB access.

Car Image 1
Car Image 2

Running the Car

We can now run the car, starting with both motors at the same speed. When trying to drive in a straight line, the car drifts to the right. This can be due to a number of factors, such as the weight distribution of the car or mechanical friction differences on each side.

To counter this, we add a calibration constant to our motors that scales the speed of each side to counteract the drift, as shown in the code below.

                            
                                analogWrite(l1, 50 * drive_diff);
                                analogWrite(l2, 0);
                                analogWrite(r1, 50 * (1 / drive_diff));
                                analogWrite(r2, 0);
                            
                            

In our testing, the best value for this constant was around 1.07. We can see the result in the video below: we place a strip of cloth on the ground and see that the car drives straight along it. While this value was found by trial and error, it would likely be better to determine it with a more systematic approach, such as measuring the distance traveled by each side of the car.

PWM Limitations

Another thing we test for is the lower PWM limit. We test two lower limits: the first is the minimum PWM value that overcomes static friction. To determine this value, we simply ramped up the PWM until the car started moving; in my testing, a PWM value of around 45 was the minimum. Note that my testing was done on carpet and the car had already been running for a while, so the battery was likely at a lower voltage; the actual value may be lower.
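The ramp test amounts to stepping the PWM upward until the wheels move; a minimal sketch with assumed bounds and step:

    // Step the PWM upward to find the minimum value that overcomes static friction
    for (int pwm = 20; pwm <= 80; pwm += 5) {
      analogWrite(l1, pwm);
      analogWrite(l2, 0);
      analogWrite(r1, pwm);
      analogWrite(r2, 0);
      Serial.println(pwm);  // note the value at which the car first moves
      delay(1000);
    }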

The second lower limit is the PWM value needed to keep moving after coasting. I determined this in a similar manner to the first, except I gave the robot a push. Doing so, I found a PWM value of around 30 was enough to continue moving after coasting.

Untethered Control

We can now control the car untethered. We implement 5 new commands through BLE: 'DRIVE_FORWARD', 'TURN_R', 'TURN_L', 'FAST_R', 'FAST_L'.

                            
                            
      case DRIVE_FORWARD:
      {
          analogWrite(l1, 50 * drive_diff);
          analogWrite(l2, 0);
          analogWrite(r1, 50 * (1 / drive_diff));
          analogWrite(r2, 0);

          delay(time_run);
          //Kill motors
          analogWrite(r1, 0);
          analogWrite(r2, 0);
          analogWrite(l1, 0);
          analogWrite(l2, 0);
          break;
      }

      case TURN_R:
      {
          analogWrite(l1, 240);
          analogWrite(l2, 0);
          analogWrite(r1, 0);
          analogWrite(r2, 240);

          delay(1200);
          //Kill motors
          analogWrite(r1, 0);
          analogWrite(r2, 0);
          analogWrite(l1, 0);
          analogWrite(l2, 0);
          break;
      }

      ....

      case FAST_L:
      {
          analogWrite(l1, 0);
          analogWrite(l2, 0);
          analogWrite(r1, 0);
          analogWrite(r2, 240);

          delay(1200);
          //Kill motors
          analogWrite(r1, 0);
          analogWrite(r2, 0);
          analogWrite(l1, 0);
          analogWrite(l2, 0);
          break;
      }

                            
                            

The first command drives the car forward at a given speed for a given time. The second and third commands turn the car for a set period of time in the respective direction. The last two commands are for fast turns while moving forward. The below video demonstrates control of the car through BLE. This was near the end of my testing, so the battery was likely at a lower voltage, causing small stutters in the movement.

Discussion

Overall, this lab was heavily focused on hardware and wiring. The main challenge I faced was an issue with one of my motor drivers, which acted as if shorted when powered: plugging in the 850 mAh battery alone would cause the motor to spin. Initially I thought the Vin and output pins were shorted; however, after probing with a multimeter, I detected no short. Interestingly, when I drove the motor driver with a PWM signal, the PWM signal would dominate the Vin signal and the motor would follow the PWM value. This allowed me to continue testing despite the issue.


Lab 5

The purpose of this lab is to be able to drive our robot with basic closed loop control. We implement a linear PID controller to drive the robot in a straight line. This is done by checking the ToF sensors and attempting to drive the robot into a wall and stop just before hitting the wall.

Prelab

We first implement a PID controller command that can be run over BLE. It loops for a set period of time and computes the proper PWM value with a PID function, which takes our current distance to the wall and a desired setpoint. The PID function is a simple linear controller that computes the error between the current distance and the setpoint. We store the current distance, PID value, and timestamp in arrays to be sent over BLE after the run finishes, in order to avoid delays in the loop.

                        
                                while (millis() - start_time < time_run) {
                          
                                    if (distanceSensor.checkForDataReady()) {
                                      distance = distanceSensor.getDistance();  //Get the result of the measurement from the sensor
                                      distanceSensor.clearInterrupt();
                                      distanceSensor.stopRanging();
                                      distanceSensor.startRanging();  //Write configuration bytes to initiate measurement

                                    }
                                    curr_pwm = PID(distance, 300);
                                    if (i < max_arr) {
                                      tof_ranges[i] = distance;
                                      pid_values[i] = curr_pwm;
                                      timestamps[i] = millis();
                                    }
                                    i++;
                                  }
                            
                            

We implement another command to easily tune the PID controller over BLE. This command takes in 3 values, the P, I, and D gains sent from our computer, and sets the controller to these values, allowing us to tune much faster.

                        
    float f_a, f_b, f_c;

    // Extract the next value from the command string as a float
    success = robot_cmd.get_next_value(f_a);
    if (!success)
      return;

    // Extract the next value from the command string as a float
    success = robot_cmd.get_next_value(f_b);
    if (!success)
      return;

    // Extract the next value from the command string as a float
    success = robot_cmd.get_next_value(f_c);
    if (!success)
      return;

    kP = f_a;
    kI = f_b;
    kD = f_c;
                            
                            

On the python side, we utilize the notification handler from before to receive the data from the robot. To send the new commands, we simply extend our command list to include the new enum values and send the commands in the same manner as before.

P Control

We first test P control alone by setting the I and D gains to 0 and tuning only the P gain. We set the setpoint to 300 mm (about 1 ft) and drive the robot forward, as shown in the video below. Initially, I set the P gain too high at 0.13, since this gives 255 PWM at 2 m, and the robot overshot significantly before stalling. I lowered the initial P gain to around 0.09 and started the robot around 2000 mm from the wall, which gives a PWM of 180.

With just P, we overshoot our setpoint by around 100 mm, as seen in the graph below. This is expected, as we do not output enough PWM once we are close to the setpoint.

PI Control

To mitigate this, we add an I term to our PID controller to compensate for steady-state error. In addition, we normalize our PWM output to the range 40 - 255, setting the lower bound near the minimum PWM value that overcomes static friction, found in lab 4. For our I term, the rate variable corresponds to the time between loop iterations, which I measured to be 1.3 ms. At 2 m, an I gain of 0.001 integrates at around 2 PWM per iteration, which I deemed a good starting point. With the integral term and a normalized PWM, we get a more aggressive response from the robot.

                        
                            
                            int PID(float curr, float setpoint) {
                                err = curr - setpoint;
                                integral += err * rate;  // Accumulate the integral term
                            
                                // Compute PID Output
                                pwm = kP * abs(err) + kI * integral;
                            
                                // Normalize PWM output
                                int normalized_pwm = map(pwm, 0, 255, 35, 255);
                                if (normalized_pwm > 200) normalized_pwm = 200;
                            
                                // Drive motors based on error sign
                                if (err < 0) {
                                drive(normalized_pwm, normalized_pwm);
                                } else {
                                back(normalized_pwm, normalized_pwm);
                                }
                            
                                return normalized_pwm;
                            }
  
                        
                        

After some tuning with the I term and normalized PWM values, we stop nearly instantly at the setpoint. This was with a P gain of 0.07 and an I gain of 0.0005. We can see the result in the graph below; we are able to stop within 10 mm of the setpoint.

Interpolation

Due to the slow rate of the ToF measurements, our PID loop only receives a new distance when the ToF samples, despite the two operations being decoupled. To mitigate this, we can linearly interpolate from the last two ToF samples to estimate the distance when no new measurement is available. We use the following code, which calculates the slope between the last two samples; if there is no new data, we extrapolate the distance along that slope.

                        
                            
                            while (millis() - start_time < time_run) {
                                new_data = false;
                                if (distanceSensor.checkForDataReady()) {
                                  new_data = true;
                                  distance = distanceSensor.getDistance();  // Get the result of the measurement from the sensor
                                  distanceSensor.clearInterrupt();
                                  distanceSensor.stopRanging();
                                  slope = (float)(distance - last_dist) / (float)(millis() - prev_measure_time);
                                  last_dist = distance;
                                  prev_measure_time = millis();
                                  distanceSensor.startRanging();  // Write configuration bytes to initiate measurement

                                }
                      
                                if(!new_data)
                                {
                                  distance = distance + slope * dt;
                                }
                               // Serial.println(distance);
                      
                                curr_pwm = PID(distance, 300);
                                // dt = millis() - prev_time;
                                // prev_time = millis();
                      
                                if (i < max_arr) {
                                  tof_ranges[i] = distance;
                                  pid_values[i] = curr_pwm;
                                  timestamps[i] = millis();
                                }
                      
                                dt = millis() - prev_time;
                                prev_time = millis();
                                i++;
                              }
                        
                        

When running with just a P gain of 0.09, the system is much more responsive than before and the graph is much smoother. The smoother signal lets the robot update the PWM more frequently, leading to a more responsive system. This surprised me, as I expected slightly more responsiveness but not a significant change. I believe this trial was also affected by my charging the battery beforehand.

PID Control

Naturally, we now want to increase the speed of the robot. To do this, I added a D term for more control over the loop. When implementing the D term, I made sure to apply an LPF to the derivative in order to prevent noise from dominating.

                        
                            
                            int PID(float curr, float setpoint) {
                                err = curr - setpoint;
                                integral += err * rate;  // Accumulate the integral term
                            
                                // **Derivative Term with Low-Pass Filter**
                                float raw_derivative = (err - prev_err) / rate;
                                derivative = d_filter_alpha * raw_derivative + (1 - d_filter_alpha) * prev_derivative;  // LPF applied
                            
                                // Compute PID Output
                                pwm = kP * abs(err) + kI * integral + kD * derivative;
                            
                                // Normalize PWM output
                                int normalized_pwm = map(pwm, 0, 255, 35, 255);
                                if (normalized_pwm > 200) normalized_pwm = 200;
                            
                                // Drive motors based on error sign
                                if (err < 0) {
                                drive(normalized_pwm, normalized_pwm);
                                } else {
                                back(normalized_pwm, normalized_pwm);
                                }
                            
                                // Store previous values for next iteration
                                prev_err = err;
                                prev_derivative = derivative;
                            
                                return normalized_pwm;
                            }
                        
                        

Despite having full PID control, I found my best results with just P and I control. I attempted a heuristic tuning procedure: increase kP until oscillation, then decrease by a factor of 2-4, repeat with kI, and use kD to respond to disturbances. I had the best results when removing the kD term; I believe further work is needed on the kD implementation, such as tuning the LPF alpha value, as I noticed some stuttering with kD that I attribute to noise. While not particularly fast, my smoothest results occurred with kP = 0.04 and kI = 0.0005, as shown in the video below. While suffering from a bit of overshoot, the system was able to recover quickly; handling integrator windup would likely improve this further.

Speed and Sample Rate

With all of the PID functions implemented, we can now determine our sample rate by counting the number of timestamps recorded over a period of 6 seconds.

We can see that our PID loop runs at around 335 Hz, giving a new PID calculation every 3 ms. Since the ToF is decoupled, we measure its sample time as well: counting the number of new ToF samples over 6 seconds, we find the ToF samples once every 100 ms. The default configuration is around this value, as it takes 100 ms to sample with 100 ms between measurements. While this could be sped up by changing the ranging time and the time between measurements, I left these values at default, as I was able to get stable values by interpolating between samples. In terms of car speed, we measure our max speed to be around 0.88 m/s from the graph.


Lab 6

The previous lab allowed us to control the robot in a straight line using PID control. This lab will allow us to control the orientation of the robot using PID control. To do this, we utilize the IMU onboard our robot to estimate our yaw.

Prelab

We change our bluetooth setup in this lab to process commands on the fly, meaning we can change the setpoint and the PID gains while the controller is running. To do this, I redesigned my code: instead of running the PID loop only when triggered by a command, I run the PID loop in the main loop and use commands to change global variables such as the setpoint and PID gains, as well as to start and stop recording data. We maintain the main loop below, which checks flags such as recording, which begins collecting data. In the future, I can add more command-settable flags to change the behavior of the robot.

                        
                        
                            while (central.connected()) {
                                poll_dmt();
                          
                                curr_pwm = PID_Turn(yaw, setpoint);
                                if(recording && i < max_arr)
                                {
                                  timestamps[i] = millis();
                                  pid_values[i] = curr_pwm;
                                  yaw_arr[i] = yaw;
                                  i++;
                                }
                                else if(i >= max_arr)
                                {
                                  i = 0;
                                  recording = false;
                                }
                          
                                // Send data
                                write_data();
                          
                                // Read data
                                read_data();
                              }
                        
                        

We implemented the following set of commands for this lab: START_RECORD, CHANGE_SETPOINT, SEND_BACK, TUNE_PID. START_RECORD will begin recording data to be sent over BLE. CHANGE_SETPOINT will change the setpoint of the PID controller. SEND_BACK will send back the data recorded by the robot. TUNE_PID will change the PID values of the controller.

On the python end, we are able to send commands on the fly as shown below, which makes tuning the PID controller much faster since I can rapidly see the changes on the robot. We utilize the notification handler from before to receive the data from the robot and parse the values into 3 arrays, which we plot using matplotlib. This allowed me to tune the PID controller graphically and see the results of my tuning.

P Control Graph

The workflow for this lab was as follows: start recording, set the PID values, set the setpoint, then send back the recording.

Getting Yaw

Initially, I simply integrated my gyroscope values to estimate the yaw of the robot. A consequence of this was significant drift from my IMU. While I tried correcting it with a constant linear offset, I would still notice drift over time, and turning the robot significantly under PID control only made it worse. Instead, I chose to utilize the DMP to obtain yaw values for my robot.

Thank you to Stephan Wagner (2024) for providing a useful guide to setting up the DMP. The DMP fuses all of the sensors onboard the IMU to return accurate yaw estimates that suffer from little drift. Looking at the datasheet, we find that the DMP runs at 1.1 kHz. We can further specify the output data rate with myICM.setDMPODRrate, where we set the ODR to the maximum of 1.1 kHz (the same as the DMP rate). Our gyroscope has a programmable range between 250 and 2000 degrees per second, which can be changed by editing the GYRO_CONFIG register; however, the DMP sets the gyro to the maximum range by default, which is more than enough sensitivity for our use case. One thing to watch out for with the DMP is the FIFO filling up, which causes crashes. To avoid this, we constantly poll the DMP in our main loop.
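For reference, a sketch of what the setup behind enable_dmt() might look like, based on the SparkFun ICM-20948 library's DMP example (the sensor and ODR register names come from that example, and the library must be compiled with ICM_20948_USE_DMP defined):

    // Configure the DMP for 6-axis quaternion output at the maximum ODR
    void enable_dmt() {
      bool ok = (myICM.initializeDMP() == ICM_20948_Stat_Ok);
      ok &= (myICM.enableDMPSensor(INV_ICM20948_SENSOR_GAME_ROTATION_VECTOR) == ICM_20948_Stat_Ok);
      ok &= (myICM.setDMPODRrate(DMP_ODR_Reg_Quat6, 0) == ICM_20948_Stat_Ok);  // 0 = maximum rate
      ok &= (myICM.enableFIFO() == ICM_20948_Stat_Ok);
      ok &= (myICM.enableDMP() == ICM_20948_Stat_Ok);
      ok &= (myICM.resetDMP() == ICM_20948_Stat_Ok);
      ok &= (myICM.resetFIFO() == ICM_20948_Stat_Ok);
      if (!ok) Serial.println("Enable DMP failed!");
    }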

                        
                            enable_dmt();
                            // While central is connected
                            while (central.connected()) {
                              poll_dmt();                        
                              // Send data
                              write_data();
                              // Read data
                              read_data();
                            }

                        
                        

P Control

We are able to obtain fairly good results with just a P term; the below video shows a controller with only a P term of 3.4. For this part, I returned to the old code structure, which only runs the PID loop when triggered by a command, because I wanted a more controlled testing environment where I could set up the robot consistently each time and tune:

As we can see in the video, the controller initially turns quickly toward the setpoint; however, a P-only controller struggles near the setpoint and carries a lot of steady-state error. The below graph shows a setpoint of 30 degrees:

P Control Graph

A fun thing we can do with just P control is to continuously run our P controller and push the robot around. We can see that the robot is able to maintain its orientation despite being pushed around.

PID Control

For this lab I implemented full PID control. Initially I planned on using PI; however, after seeing how quickly the robot turned with P control alone, I decided it would be best to add a derivative term to dampen out large changes in slope. I made sure to include an LPF after looking at the raw DMP output: while the graph looks smooth from afar, I wanted to keep the derivative term from jumping on the small bumps in the curve. I chose a fairly high alpha of 0.7, as the bumps in the graph were not significant.

PID Control Graph
                            
                                // **Derivative Term with Low-Pass Filter**
                                float raw_derivative = (err - prev_err) / rate;
                                derivative = d_filter_alpha * raw_derivative + (1 - d_filter_alpha) * prev_derivative;  // LPF applied
                                
                            

The below displays our PID controller. One thing we need to handle is wraparound of the error, since our yaw value lies between 0 and 360 degrees and we don't want a large error when the robot is just past the wrap point. To do this, we use the fmodf function to wrap the error around 360 degrees and then subtract 180 to get an error between -180 and 180 degrees. For example, with a setpoint of 10 degrees and a yaw of 350 degrees, fmodf(10 - 350 + 540, 360) - 180 = 20 degrees: a short turn rather than a 340 degree error. Similar to lab 5, we normalize our PWM between 35 and 255 to avoid stalling the motors. To avoid derivative kick, I changed the derivative term to be the derivative of the yaw (the process variable) rather than of the error, so changing setpoints does not produce a massive kick.

                            

                                int PID_Turn(float curr, float setpoint) {
                                    // Compute angular error with wraparound handling
                                    float err = fmodf(setpoint - curr + 540.0, 360.0) - 180.0;
                                    integral += err * rate;  // Accumulate the integral term
                                  
                                    // **Derivative Term with Low-Pass Filter**
                                    float raw_derivative = (yaw - prev_yaw) / rate;
                                    derivative = d_filter_alpha * raw_derivative + (1 - d_filter_alpha) * prev_derivative;  // LPF applied
                                  
                                    // Compute PID Output
                                    //pwm = kP * abs(err) + kI * integral + kD * derivative;
                                  
                                    pwm = kP * err + kI * integral + kD * derivative;
                                  
                                    // Normalize PWM output
                                    int normalized_pwm;
                                    if (pwm < 0) {
                                      if (pwm < -255) {
                                        pwm = -255;
                                      }
                                      normalized_pwm = map(pwm, -255, 0, -255, -35);
                                    } else {
                                      if (pwm > 255) {
                                        pwm = 255;
                                      }
                                      normalized_pwm = map(pwm, 0, 255, 35, 255);
                                    }
                                    diff_drive(normalized_pwm, -normalized_pwm);
                                    // Store previous values for next iteration
                                    prev_yaw = yaw;
                                    prev_derivative = derivative;
                                  
                                    return normalized_pwm;
                                  }
                            
                            

After tuning the PID values following the heuristic procedure, I found the best results with a P value of 2.8, an I value of 0.014, and a D value of 0.001, giving us little steady-state error as shown below (setpoint is 30 degrees):

PID Control Graph

On the fly setpoint changes

The below video shows us changing the setpoint on the fly to verify that we don't suffer from derivative kick. While we don't see much kick in the graph, we unfortunately do not hit our setpoints accurately. This could be resolved with more tuning; however, this was late in my testing and my battery had lost charge. Looking at the graph, I believe a higher derivative term would reduce the amount of kick contributed by the P term, and a lower proportional term would reduce the overshoot. It may also be beneficial to reset the integral term when changing setpoints to avoid integrator windup.

PID Tuning Graph

Discussion

This lab was a great exercise in implementing PID control for orientation. The use of DMP for yaw estimation significantly improved the accuracy and reduced drift compared to integrating raw gyroscope data. The ability to tune PID parameters on the fly and change setpoints dynamically made the process much more efficient and allowed for rapid testing and iteration.

One challenge I faced was the surface I was testing on. Initially, I was testing on a carpet which made it extremely difficult to get consistent values as the differential drive could not turn. I then switched to glass which was easier to turn on however I believe I would have benefited from a more slippery surface which would allow the robot to skid easier as it turned.

Lab 7

Lab 7 has us implement a Kalman filter on the ToF data. This allows us to estimate the distance to the wall more accurately, letting us execute at a faster rate. We accomplish this by first estimating a model for our robot and testing it in Python; once we have tuned the filter, we deploy it on the robot.

Task 1. Estimate Drag and Momentum

The first task is to estimate the drag and momentum of the robot, which we accomplish by taking the step response of the robot. First, we need to choose a step input u(t) that is within 50% - 100% of our max PWM. I chose 135, which is around 50% of the max PWM. We then run the robot and measure the ToF until we have reached steady-state velocity.

The above graph shows the results of our step response. Unfortunately, the computed velocity is very noisy; to get a better estimate, I manually found the slope where the ToF curve became linear. We can then use the data to estimate the drag and momentum of the robot with the following equations.

Drag Estimation Momentum Estimation

With a step response of 135, we obtain the following constants:

  • Steady State Velocity: 2190 mm/s
  • Steady State Time: 0.6 s
  • 90% Rise Time: 0.54 s
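As a quick sanity check, these constants plug into the first-order model from lecture. Assuming the step input is normalized to u = 1, the formulas are d = u / v_ss and m = -d * t_0.9 / ln(0.1), which gives roughly:

    d = 1 / 2190 ≈ 4.6e-4
    m = -(4.6e-4 * 0.54) / ln(0.1) ≈ 1.1e-4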

Plugging these values into Python we obtain the following values for drag and momentum:

Drag and Momentum Estimation


Task 2. Initialize KF (Python)

Our first step is to find the A and B matrices for the Kalman filter. Using the A and B matrices from lecture, we plug in the values we found. Because our ToF ranges every 54 ms, we choose a Delta_t of 0.054 s.

                            
                                Delta_t = 0.054 # ToF ranging occurs every 54ms
                                n = 2 # 2 dim
                                
                                # A and B matrices
                                A = np.array([[0,1],[0,-d/m]])
                                B = np.array([[0],[1/m]])
                                
                                Ad = np.eye(n) + Delta_t * A  #n is the dimension of your state space 
                                Bd = Delta_t * B
                            
                        

This allows us to obtain the following A and B matrices:

A and B Matrices

We can then determine our C matrix and state vector. Because we are only measuring the distance to the wall, we set the C matrix to [-1, 0] (since we measure the negative distance to the wall). We then initialize the state vector at our initial distance from the wall.

C and X Matrices
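Since the screenshot above isn't copyable, the equivalent definitions are sketched below; the tof_f values here are stand-ins for the logged ToF array:

    import numpy as np

    tof_f = np.array([1200.0])        # example ToF trace (mm); the real data is the logged array

    C = np.array([[-1, 0]])           # we observe the negative distance to the wall
    x = np.array([[-tof_f[0]], [0]])  # initial state: first ToF reading, zero velocity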

With our matrices in place, we then need to characterize our noise. For sigma_3 (measurement noise) we use our data from Lab 3, which had a standard deviation of 15 mm. For sigma_1 and sigma_2 (process noise), we can estimate with the formula from lecture:

Sigma Estimation

This gives us the following values for our noise matrices:

Noise Matrices
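For reference, here is a sketch of how these covariances can be assembled. The sqrt(10^2 / Delta_t) process-noise heuristic is the one I recall from lecture, so treat it as an assumption rather than the exact values in the screenshot:

    import numpy as np

    Delta_t = 0.054
    sigma_1 = sigma_2 = np.sqrt(10**2 / Delta_t)  # process noise (~43)
    sigma_3 = 15.0                                # measurement noise std (mm, from Lab 3)

    sig_u = np.array([[sigma_1**2, 0],
                      [0, sigma_2**2]])  # process noise covariance
    sig_z = np.array([[sigma_3**2]])     # measurement noise covariance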

Task 3. Implement Kalman Filter in Jupyter

With our matrices, we can now implement the Kalman filter in Python. The below code shows the implementation of the Kalman filter:

                                    
    def kf(mu, sigma, u, y):
        # Prediction step (uses the discretized Ad and Bd from above)
        mu_p = Ad.dot(mu) + Bd.dot(u)
        sigma_p = Ad.dot(sigma.dot(Ad.transpose())) + sig_u

        # Kalman gain computation
        sigma_m = C.dot(sigma_p.dot(C.transpose())) + sig_z
        kkf_gain = sigma_p.dot(C.transpose().dot(np.linalg.inv(sigma_m)))

        # Correction step
        y_m = y - C.dot(mu_p)
        mu = mu_p + kkf_gain.dot(y_m)
        sigma = (np.eye(2) - kkf_gain.dot(C)).dot(sigma_p)

        return mu, sigma
                                    
                                

By looping through our real ToF data, we can call the Kalman filter function to get an estimate. We first normalize our PWM by 135 to match our normalized step response. We also need to set an initial covariance matrix and state: our initial covariance is set to 225, the same as our measurement uncertainty, and our initial state is our first ToF measurement.

                            
    pwms = np.array(pwm_f) / 135.0
    sig = np.array([[225, 0], [0, 225]])
    x = np.array([[-tof_f[0]], [0]])

    kf_data = []
    for pwm, tof in zip(pwms, tof_f_filtered):
        x, sig = kf(x, sig, [[pwm]], [[-tof]])
        kf_data.append(x)
                            
                        

With our initial noise values, we are able to get the following KF curve:

Kalman Filter Curve

To have a closer fit, we drop the measurement noise sigma down to 15:

Kalman Filter Curve

For our best fit, to smooth out more of the noise, we increase our process noise up to 65 mm. This gives us our most balanced fit, filtering out some of the noise while remaining responsive to new measurement data.

Kalman Filter Curve

Task 4. Implement Kalman Filter on Robot

With our Kalman filter tuned, we can now implement it on the robot in Arduino. Using the BasicLinearAlgebra library, we can perform the matrix operations for our filter algorithm. The below code shows the implementation of the Kalman filter:

                            
    void kf(BLA::Matrix<2, 1>& mu, BLA::Matrix<2, 2>& sigma, BLA::Matrix<1, 1> u, BLA::Matrix<1, 1> y) {
        // Prediction step
        BLA::Matrix<2, 1> mu_p = A * mu + B * u;
        BLA::Matrix<2, 2> sigma_p = A * sigma * (~A) + sig_u;

        // Kalman gain computation
        BLA::Matrix<1, 1> sigma_m = C * sigma_p * (~C) + sig_z;
        Invert(sigma_m);
        BLA::Matrix<2, 1> K = sigma_p * (~C) * sigma_m;

        // Correction step
        BLA::Matrix<1, 1> y_m = y - C * mu_p;
        mu = mu_p + K * y_m;
        sigma = (iden - K * C) * sigma_p;
    }
                            
                        

To run the filter in real time, we need to constantly feed in new ToF data. Because the ToF data only updates every 54 ms, we are often passing in repeated measurements, which made the filter output follow a stepped curve. We run the following code every loop to compute new KF values for our PID control, where mu(0,0) is the filter's distance estimate and distance is our ToF value:

                            
                                kf(mu, sigma, (float)curr_pwm / 220.0, -distance);  // normalize PWM, feed in negative distance
                                curr_pwm = PID(mu(0,0), 300);                       // run PID on the filtered estimate
                                       
                        

We still see that the data is stepped, since we pass in repeated measurements when no new reading is available.

Kalman Filter Curve

I realized that I had made a mistake: I called the KF every iteration of the control loop, meaning the KF would be run even when I did not have a new ToF measurement. I changed the code to only run the KF when a new ToF measurement was ready, which netted the below graph. The output was still somewhat stepped because my ToF sensor was returning identical values on separate iterations, but this was not an issue with the filter.

Kalman Filter Curve

Discussion

This lab was a great exercise in implementing the Kalman filter. Unfortunately, I was unable to remove much of the stepped behavior on the real robot because the ToF kept returning the last measured value. I believe I could have fixed this by manually checking whether the ToF value was identical to the last value, or by decreasing the ToF intermeasurement period. For the stunts lab, I plan to return to using linear interpolation to speed up the data, as I believe this will be a simpler setup.

Lab 8

This lab, we do stunts with the robot! The stunt I chose to perform was the flip, as I believed it was better suited to my robot: I had mounted the ToF sensor on the heavier side of the robot, which would help with the flip.

Implementing the flip in code

In order to perform a flip, I needed to implement a new command. First, I removed my PID controller, as we do not want to drop speed going into the flip. Initially, I used the braking functionality to try to flip: in code, I would drive forward until my distance to the wall broke a certain threshold, brake for a fixed amount of time, then reverse for the same amount of time it took to drive to the wall. With my initial code, I was unable to flip the robot, which I believe was because I was not switching directions fast enough.

                
    case FLIP_DRIVE:
    {
        start_time = millis();
        while (distance < DISTANCE_THRESHOLD) {
            // Check and interpolate distance, drive forward
        }

        brake();
        delay(300);

        // Time it took to drive forward; slightly longer back due to the flip
        time_to_drive = millis() - start_time + 100;
        start_time = millis();

        while (millis() - start_time < time_to_drive)
        {
            back(255, 255);
        }
    }
                
                

To compensate, I changed a couple of things in hardware and software. First, I moved the position of the Artemis battery: initially it was at the back of the robot, but I moved it into the battery compartment. This shifted the weight distribution forward, making the robot more unstable and easier to pitch into a flip.

Robot Flip

I also changed the code to use a different drive method. Instead of running the motors in fast decay mode (coast), I switched to slow decay mode (brake). This let me stop the motors much more quickly and switch directions faster. In addition, the robot would not coast as much after I stopped the motors.

Robot Flip
                    
    // Slow decay mode
    void drive(int r_pwm, int l_pwm) {
        analogWrite(l1, 1);
        analogWrite(l2, l_pwm);
        analogWrite(r1, 1);
        analogWrite(r2, r_pwm);
    }

    // Slow decay mode
    void back(int r_pwm, int l_pwm) {
        analogWrite(l1, l_pwm);
        analogWrite(l2, 1);
        analogWrite(r1, r_pwm);
        analogWrite(r2, 1);
    }
                    
                    

However, these changes still would not let me flip. I then changed the code to instantly switch directions rather than braking first: I drive forward at max PWM, then instantly switch to reverse at max PWM. This allowed me to get the robot to flip. Note that I set the distance threshold to 900 mm, because the sticky mat I was flipping on began around 800 mm from the wall and I wanted a bit of extra room.

                        
    case FLIP_DRIVE:
    {
        start_time = millis();
        while (distance < DISTANCE_THRESHOLD) {
            // Check and interpolate distance, drive forward
        }

        // Time it took to drive forward; slightly longer back due to the flip
        time_to_drive = millis() - start_time + 100;
        start_time = millis();

        while (millis() - start_time < time_to_drive)
        {
            back(255, 255);
        }
    }
                      
                        
                        

Successful Runs

The below videos show successful runs of the flip on the sticky mat.

The below shows the interpolated ToF and outputted PWM values. I used linear interpolation to update my distance rather than the KF, as I believed this would let me avoid tuning issues. Note that I cut off the ToF data after the flip because the robot had flipped and was no longer facing the wall. The PWM values are also negated after the flip since the robot must change direction.

Robot Flip Robot Flip

Blooper

I had a lot of crashes while testing the flip, but this was a run where I accidentally inverted the motors after switching to slow decay mode. Chaos ensued after the cut.

Discussion

This lab was a lot of fun, implementing stunts on the robot. I did note that the robot would sometimes flip off at an angle, which I believe is due to motor imbalances. I had previously calibrated out these imbalances, but I removed the calibration in order to run the motors at max PWM for this lab.

Lab 9

In this lab, we map out a static room using the ToF sensor. We will use this map in future labs for localization and navigation. To accomplish this, we place the robot at multiple positions and perform a 'scan' of the room by rotating the robot, using our angular PID controller to rotate to various setpoints.

Task 1: Control

We choose to use orientation control to rotate the robot to various setpoints. To do this, we use the DMP to get the yaw of the robot, as this is more accurate than integrating the gyroscope. To tune the PID controller for small increments, we significantly crank the P value up to 8, much higher than in previous labs. This gives us a controller that looks like this when scanning:

The above video shows the robot scanning in a circle. As we are using the DMP to get the yaw, we suffer from less drift. We do notice that the robot cannot turn perfectly in place, which introduces slight errors into our ToF data. This error is very slight, however: I noticed the robot drifted at most around 10 cm from its start point, so assuming a 4x4 m empty room, we would likely be off by at most 10 cm, and in average cases by around 5 cm. In addition, as the ToF sensor is not centered on the robot, we need to add a 7 cm offset to the measurements.

Room Scan Visualization

We record the yaw values from our scan. As we can see from the graph, we are able to control the yaw in relatively consistent angle intervals. Note that because we range the ToF sensor in between turns, we do not poll the DMP during that time, so the yaw values do not always update. We are consistently able to hit the setpoints to a reasonable degree.

Room Scan Visualization

Task 2: Read out Distances

There are multiple marked positions in the lab: (-3,-2), (5,3), (0,3) and (5,-3). We perform our scan at each of these positions with the following code:

                    
    while (scans < 15) {
        poll_dmt();
        start_time = millis();
        while (millis() - start_time < 1000)
        {
            curr_pwm = PID_Turn(yaw, setpoint);
            poll_dmt();
        }
        diff_drive(0, 0);
        int dist = 0;
        for (int i = 0; i < 3; i++)
        {
            while (!distanceSensor.checkForDataReady())
            {
                delay(1);
            }
            dist += distanceSensor.getDistance();
        }

        dist /= 3;

        yaw_arr[scans] = yaw;
        tof_ranges[scans] = dist;

        setpoint = (setpoint + 25) % 360;
        integral = 0.0; // reset integral term for next pid turn
        scans++;
    }
                    
                

We rotate the robot in 25 degree increments. For each setpoint, we wait until we are stationary, then take 3 ToF samples and average them for a less noisy reading. Because we do not fully hit the setpoint every time, we send back the robot's current yaw rather than the setpoint, which accounts for some of the PID error since we are not fully reliant on hitting the setpoint perfectly. Below are the individual polar plots for each position; each scan consists of 14 samples. While these can't show the full map, they give a good indication of the surrounding features.

Room Scan Visualization Room Scan Visualization Room Scan Visualization Room Scan Visualization

To test precision, we run the scan twice at the same position and overlay the two scans to see how repeatable they are. From the below graph, we can see that we are able to repeat the scan within the error range of the ToF sensor. Note that the blue trial has a point that is significantly closer; this is because another robot was running trials at the time.

Room Scan Visualization

Task 3: Merge Readings

In order to merge the readings, we can take advantage of the homogeneous transformation matrix.

Room Scan Visualization
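Written out, applying a 2D homogeneous transform to a scan point (x, y) looks like the following, where the upper-left 2x2 block is the rotation and (d_x, d_y) is the translation:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} R_{11} & R_{12} & d_x \\ R_{21} & R_{22} & d_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$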

We can keep the rotational component as the identity matrix since I made sure to keep the robot facing the same direction at the start of each scan. The translation component is then simply the displacement from the origin. We keep the Z component at 0 and vary X and Y depending on the point we are at (I used the 2D transformation matrix as we are not operating in Z). We know that the floor tiles marking the points are in feet, so we convert them to mm to get our translation, then apply the transformation to the points in the local maps. The below code shows our transformation:

                            
    # Relative to origin
    cx_mm = cx_ft * 304.8  # Convert ft to mm
    cy_mm = cy_ft * 304.8
    dx = cx_mm - origin_cx_mm
    dy = cy_mm - origin_cy_mm

    # Translation matrix
    T = np.array([
        [1, 0, dx],
        [0, 1, dy],
        [0, 0,  1]
    ])

    # Apply transformation (local_map is a 3xN array of homogeneous points)
    global_map = T @ local_map
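For completeness, here is a minimal sketch of how a local map can be built from one scan, assuming yaw_arr (degrees) and tof_ranges (mm) are the arrays sent back by the robot; the values below are stand-ins:

    import numpy as np

    yaw_arr = [0.0, 25.0, 50.0]         # example yaw readings (degrees)
    tof_ranges = [800.0, 950.0, 700.0]  # example ToF readings (mm)

    theta = np.radians(-np.array(yaw_arr))  # invert yaw: the DMP treats clockwise as positive
    r = np.array(tof_ranges) + 70.0         # add the 7 cm sensor offset (mm)

    # 3xN homogeneous points in the robot's local frame
    local_map = np.vstack([r * np.cos(theta),
                           r * np.sin(theta),
                           np.ones_like(r)])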

                            
                        

Upon plotting the data, we obtain the map below. Note that I needed to invert the yaw values to treat clockwise as positive rotation, as this is how the DMP is oriented:

Room Scan Visualization

Task 4: Convert to Line-Based Map

After looking at my merged map, I noticed that the data was slightly crooked. Upon inspecting my data, I realized that the angle I was using to zero the robot was not aligned with the DMP. To fix this, I simply added an offset to all of the yaw data, netting the below graph.

Room Scan Visualization

We can then draw an estimate of the walls and obstacles over our scatter plot. There are definitely some outliers in the data, though many of them are due to other robots in the lab at the time. The outliers that overshoot are likely due to noise and sensor error.

Room Scan Visualization

Discussion

After reviewing my maps, I realized that I should have taken more data points per scan. I only took 14 samples per scan, and I believe this was not enough to get strong estimates of the room, especially when scanning features that were far away. While the coarse increments made the PID control easier to tune, they also made the data much more spaced out than I would have liked.

Lab 10

This lab is focused on implementing localization with the Bayes filter in a simulated environment. We do this by implementing the set of functions that make up the filter.

Compute Control

                    
                        def compute_control(cur_pose, prev_pose):
                            """ Given the current and previous odometry poses, this function extracts
                            the control information based on the odometry motion model.
                        
                            Args:
                                cur_pose  ([Pose]): Current Pose
                                prev_pose ([Pose]): Previous Pose 
                        
                            Returns:
                                [delta_rot_1]: Rotation 1  (degrees)
                                [delta_trans]: Translation (meters)
                                [delta_rot_2]: Rotation 2  (degrees)
                            """
                            delta_rot_1 = mapper.normalize_angle(math.atan2((cur_pose[1] - prev_pose[1]), (cur_pose[0] - prev_pose[0])) - prev_pose[2])
                            delta_trans = math.sqrt((cur_pose[1] - prev_pose[1])**2 + (cur_pose[0] - prev_pose[0])**2)
                            delta_rot_2 = mapper.normalize_angle(cur_pose[2] - math.atan2((cur_pose[1] - prev_pose[1]),(cur_pose[0] - prev_pose[0])))
                        
                            return delta_rot_1, delta_trans, delta_rot_2
                    
                

The above code shows the compute_control function. Taking in the current and previous pose, we determine the first rotation, the translation, and the second rotation. We find the first rotation by taking the arctangent of the difference in Y and X coordinates, subtracting the previous heading, and normalizing. For the translation, we simply use the distance formula. Finally, we find the second rotation by taking the difference between the current heading and the arctangent of the difference in Y and X coordinates.
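Written out, with (x, y, θ) the previous pose and (x', y', θ') the current pose, the three control parameters are:

$$\delta_{rot1} = \mathrm{atan2}(y' - y,\ x' - x) - \theta$$
$$\delta_{trans} = \sqrt{(x' - x)^2 + (y' - y)^2}$$
$$\delta_{rot2} = \theta' - \mathrm{atan2}(y' - y,\ x' - x)$$

where both rotations are wrapped by mapper.normalize_angle.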

Odom Motion Model

                    
                        def odom_motion_model(cur_pose, prev_pose, u):
                            """ Odometry Motion Model
                        
                            Args:
                                cur_pose  ([Pose]): Current Pose
                                prev_pose ([Pose]): Previous Pose
                                (rot1, trans, rot2) (float, float, float): A tuple with control data in the format 
                                                                        format (rot1, trans, rot2) with units (degrees, meters, degrees)
                        
                            
                            Returns:
                                prob [float]: Probability p(x'|x, u)
                            """
                            ctr = compute_control(cur_pose, prev_pose)
                        
                            pr_rot1 = loc.gaussian(u[0], ctr[0], loc.odom_rot_sigma)
                            pr_trans = loc.gaussian(u[1], ctr[1], loc.odom_trans_sigma)
                            pr_rot3 = loc.gaussian(u[2], ctr[2], loc.odom_rot_sigma)
                        
                        
                            prob = pr_rot1 * pr_trans * pr_rot3
                            
                            return prob
                    
                

The above code shows the odometry motion model. This function takes in the current and previous poses along with the control data, and returns the probability that we arrive at the current pose given the previous pose and control. We do this by using a Gaussian to find the probability of each of the three components, then multiplying these probabilities together to get the final probability. The sigma values encode our confidence in the odometry model.
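In other words, treating the three components as independent Gaussians centered on the deltas recovered from the pose pair:

$$p(x' \mid x, u) = \mathcal{N}(u_{rot1};\ \delta_{rot1},\ \sigma_{rot}) \cdot \mathcal{N}(u_{trans};\ \delta_{trans},\ \sigma_{trans}) \cdot \mathcal{N}(u_{rot2};\ \delta_{rot2},\ \sigma_{rot})$$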

Prediction Step

                        
                            def prediction_step(cur_odom, prev_odom):
                                """ Prediction step of the Bayes Filter.
                                Update the probabilities in loc.bel_bar based on loc.bel from the previous time step and the odometry motion model.

                                Args:
                                    cur_odom  ([Pose]): Current Pose
                                    prev_odom ([Pose]): Previous Pose
                                """
                                u = compute_control(cur_odom, prev_odom)
                                for x_last in range(mapper.MAX_CELLS_X):
                                    for y_last in range(mapper.MAX_CELLS_Y):
                                        for theta_last in range(mapper.MAX_CELLS_A):
                                            
                                            bel = loc.bel[(x_last,y_last,theta_last)]
                                            if(bel > 0.0001):
                                                for x in range(mapper.MAX_CELLS_X):
                                                    for y in range(mapper.MAX_CELLS_Y):
                                                        for theta in range(mapper.MAX_CELLS_A):
                                                            prob = odom_motion_model(mapper.from_map(x,y,theta), mapper.from_map(x_last,y_last,theta_last),u)
                                                            loc.bel_bar[(x,y,theta)] += prob * bel


                        
                    

The above code shows the prediction step of our filter. We loop through each grid cell and calculate the probability that the robot moves from it to every other grid cell. Looking at Daoyuan Jin's (2024) code, I saw that he sped up the prediction by skipping cells whose belief falls below a threshold of 0.0001. This cuts out many calculations and speeds up the filter.

Sensor Model

                    
                        def sensor_model(obs):
                            """ This is the equivalent of p(z|x).


                            Args:
                                obs ([ndarray]): A 1D array consisting of the true observations for a specific robot pose in the map 

                            Returns:
                                [ndarray]: Returns a 1D array of size 18 (=loc.OBS_PER_CELL) with the likelihoods of each individual sensor measurement
                            """
                            prob_array = np.zeros(mapper.OBS_PER_CELL)
                            for i in range(mapper.OBS_PER_CELL):
                                prob_array[i] = loc.gaussian(loc.obs_range_data[i], obs[i], loc.sensor_sigma)    

                            return prob_array
                    
                

We generate an array of probabilities corresponding to each sensor measurement. This lets us assess the likelihood of the observed ranges given the ground-truth views for a given pose on the grid.
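Per measurement, this evaluates a Gaussian likelihood around the precomputed true range for that pose:

$$p(z_i \mid x) = \mathcal{N}(z_i;\ \hat{z}_i(x),\ \sigma_s), \quad i = 1, \dots, 18$$

where the expected ranges come from mapper.get_views and sigma_s is loc.sensor_sigma.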

Update Step

                    
                        def update_step():
                            """ Update step of the Bayes Filter.
                            Update the probabilities in loc.bel based on loc.bel_bar and the sensor model.
                            """
                        
                            for x in range(mapper.MAX_CELLS_X):
                                for y in range(mapper.MAX_CELLS_Y):
                                    for theta in range(mapper.MAX_CELLS_A):
                                        loc.bel[(x,y,theta)] = np.prod(sensor_model(mapper.get_views(x,y,theta))) * loc.bel_bar[(x,y,theta)]
                        
                            loc.bel = loc.bel / np.sum(loc.bel)

                    
                

In our update step, we simply loop through each grid cell and update its belief based on the sensor model and the prior belief. We then normalize the belief so that it sums to 1 and underflow does not occur.
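This is the standard Bayes filter update,

$$bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t)$$

where the normalizer η is computed by the final division by the sum of the beliefs.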

Results

The below plots show 2 runs of my simulation. Immediately, we can see that the Bayes filter gives a strong estimate of the robot's position. In my runs, I noticed that the filter is stronger near the walls and around the box. I believe this is because there are more distinguishing features there, which allows the filter to concentrate its belief in only a few places.

Bayes Filter Simulation Bayes Filter Simulation

The below video shows the simulation running.

I also captured a log of the data from the simulation, shown below:

                    
                    
----------------- 0 -----------------
2025-04-21 20:11:01,818 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:01,824 | INFO     |: GT index         : (6, 3, 7)
2025-04-21 20:11:01,825 | INFO     |: Prior Bel index  : (6, 4, 9) with prob = 0.5865045
2025-04-21 20:11:01,826 | INFO     |: POS ERROR        : (-0.013, -0.090, -49.725)
2025-04-21 20:11:01,828 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:05,582 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:11:05,600 | INFO     |: GT index      : (6, 3, 7)
2025-04-21 20:11:05,601 | INFO     |: Bel index     : (6, 4, 6) with prob = 0.6837452
2025-04-21 20:11:05,602 | INFO     |: Bel_bar prob at index = 0.001200863200335884
2025-04-21 20:11:05,602 | INFO     |: GT            : (0.291, -0.090, 320.275)
2025-04-21 20:11:05,602 | INFO     |: Belief        : (0.305, 0.000, -50.000)
2025-04-21 20:11:05,604 | INFO     |: POS ERROR     : (-0.013, -0.090, 370.275)
2025-04-21 20:11:05,606 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 1 -----------------
2025-04-21 20:11:07,745 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:07,756 | INFO     |: GT index         : (7, 2, 5)
2025-04-21 20:11:07,757 | INFO     |: Prior Bel index  : (6, 4, 9) with prob = 0.5865048
2025-04-21 20:11:07,758 | INFO     |: POS ERROR        : (0.210, -0.528, 286.975)
2025-04-21 20:11:07,760 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:11,136 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:11:11,139 | INFO     |: GT index      : (7, 2, 5)
2025-04-21 20:11:11,140 | INFO     |: Bel index     : (6, 2, 5) with prob = 0.9999999
2025-04-21 20:11:11,140 | INFO     |: Bel_bar prob at index = 0.000515075711575808
2025-04-21 20:11:11,141 | INFO     |: GT            : (0.514, -0.528, 656.975)
2025-04-21 20:11:11,142 | INFO     |: Belief        : (0.305, -0.610, -70.000)
2025-04-21 20:11:11,142 | INFO     |: POS ERROR     : (0.210, 0.081, 726.975)
2025-04-21 20:11:11,143 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 2 -----------------
2025-04-21 20:11:12,231 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:12,248 | INFO     |: GT index         : (7, 2, 4)
2025-04-21 20:11:12,249 | INFO     |: Prior Bel index  : (5, 2, 3) with prob = 0.6553140
2025-04-21 20:11:12,249 | INFO     |: POS ERROR        : (0.514, 0.081, 744.055)
2025-04-21 20:11:12,251 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:16,099 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:11:16,109 | INFO     |: GT index      : (7, 2, 4)
2025-04-21 20:11:16,110 | INFO     |: Bel index     : (6, 2, 4) with prob = 1.0
2025-04-21 20:11:16,111 | INFO     |: Bel_bar prob at index = 0.0005145065122474992
2025-04-21 20:11:16,111 | INFO     |: GT            : (0.514, -0.528, 994.055)
2025-04-21 20:11:16,112 | INFO     |: Belief        : (0.305, -0.610, -90.000)
2025-04-21 20:11:16,112 | INFO     |: POS ERROR     : (0.210, 0.081, 1084.055)
2025-04-21 20:11:16,113 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 3 -----------------
2025-04-21 20:11:17,258 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:17,266 | INFO     |: GT index         : (7, 0, 4)
2025-04-21 20:11:17,267 | INFO     |: Prior Bel index  : (4, 0, 4) with prob = 0.8595106
2025-04-21 20:11:17,268 | INFO     |: POS ERROR        : (0.847, 0.292, 1084.055)
2025-04-21 20:11:17,270 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:21,019 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:11:21,038 | INFO     |: GT index      : (7, 0, 4)
2025-04-21 20:11:21,039 | INFO     |: Bel index     : (7, 1, 4) with prob = 1.0
2025-04-21 20:11:21,040 | INFO     |: Bel_bar prob at index = 0.0005437912915977165
2025-04-21 20:11:21,040 | INFO     |: GT            : (0.543, -0.927, 1354.055)
2025-04-21 20:11:21,040 | INFO     |: Belief        : (0.610, -0.914, -90.000)
2025-04-21 20:11:21,041 | INFO     |: POS ERROR     : (-0.067, -0.013, 1444.055)
2025-04-21 20:11:21,042 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 4 -----------------
2025-04-21 20:11:24,150 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:24,166 | INFO     |: GT index         : (8, 0, 8)
2025-04-21 20:11:24,167 | INFO     |: Prior Bel index  : (4, 0, 4) with prob = 0.8595106
2025-04-21 20:11:24,167 | INFO     |: POS ERROR        : (1.115, 0.146, 1529.042)
2025-04-21 20:11:24,168 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:27,980 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:11:27,990 | INFO     |: GT index      : (8, 0, 8)
2025-04-21 20:11:27,991 | INFO     |: Bel index     : (8, 1, 9) with prob = 1.0
2025-04-21 20:11:27,992 | INFO     |: Bel_bar prob at index = 0.0005145616680653931
2025-04-21 20:11:27,992 | INFO     |: GT            : (0.810, -1.074, 1799.997)
2025-04-21 20:11:27,993 | INFO     |: Belief        : (0.914, -0.914, 10.000)
2025-04-21 20:11:27,995 | INFO     |: POS ERROR     : (-0.104, -0.159, 1789.997)
2025-04-21 20:11:27,995 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 5 -----------------
2025-04-21 20:11:34,112 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:34,126 | INFO     |: GT index         : (11, 1, 11)
2025-04-21 20:11:34,127 | INFO     |: Prior Bel index  : (4, 0, 4) with prob = 0.8595106
2025-04-21 20:11:34,130 | INFO     |: POS ERROR        : (1.899, 0.306, 1939.375)
2025-04-21 20:11:34,133 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:37,498 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:11:37,510 | INFO     |: GT index      : (11, 1, 11)
2025-04-21 20:11:37,511 | INFO     |: Bel index     : (9, 0, 10) with prob = 1.0
2025-04-21 20:11:37,511 | INFO     |: Bel_bar prob at index = 0.6456620603957459
2025-04-21 20:11:37,512 | INFO     |: GT            : (1.594, -0.913, 2209.375)
2025-04-21 20:11:37,513 | INFO     |: Belief        : (1.219, -1.219, 30.000)
2025-04-21 20:11:37,513 | INFO     |: POS ERROR     : (0.375, 0.306, 2179.375)
2025-04-21 20:11:37,514 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 6 -----------------
2025-04-21 20:11:39,679 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:39,695 | INFO     |: GT index         : (11, 2, 12)
2025-04-21 20:11:39,696 | INFO     |: Prior Bel index  : (10, 0, 11) with prob = 1.0876907
2025-04-21 20:11:39,696 | INFO     |: POS ERROR        : (0.150, 0.698, 2188.507)
2025-04-21 20:11:39,697 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:43,445 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:11:43,454 | INFO     |: GT index      : (11, 2, 12)
2025-04-21 20:11:43,455 | INFO     |: Bel index     : (10, 2, 12) with prob = 0.9960824
2025-04-21 20:11:43,458 | INFO     |: Bel_bar prob at index = 0.0005167027382368399
2025-04-21 20:11:43,459 | INFO     |: GT            : (1.674, -0.521, 2598.506)
2025-04-21 20:11:43,460 | INFO     |: Belief        : (1.524, -0.610, 70.000)
2025-04-21 20:11:43,461 | INFO     |: POS ERROR     : (0.150, 0.088, 2528.506)
2025-04-21 20:11:43,463 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 7 -----------------
2025-04-21 20:11:45,563 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:45,567 | INFO     |: GT index         : (11, 3, 13)
2025-04-21 20:11:45,568 | INFO     |: Prior Bel index  : (10, 0, 11) with prob = 1.0876907
2025-04-21 20:11:45,569 | INFO     |: POS ERROR        : (0.222, 1.051, 2554.237)
2025-04-21 20:11:45,570 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:48,918 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:11:48,929 | INFO     |: GT index      : (11, 3, 13)
2025-04-21 20:11:48,931 | INFO     |: Bel index     : (11, 3, 13) with prob = 1.0
2025-04-21 20:11:48,932 | INFO     |: Bel_bar prob at index = 0.0008547055251947457
2025-04-21 20:11:48,933 | INFO     |: GT            : (1.746, -0.169, 2964.333)
2025-04-21 20:11:48,933 | INFO     |: Belief        : (1.829, -0.305, 90.000)
2025-04-21 20:11:48,936 | INFO     |: POS ERROR     : (-0.083, 0.136, 2874.333)
2025-04-21 20:11:48,937 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 8 -----------------
2025-04-21 20:11:52,041 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:52,055 | INFO     |: GT index         : (11, 5, 14)
2025-04-21 20:11:52,056 | INFO     |: Prior Bel index  : (10, 0, 11) with prob = 1.0876907
2025-04-21 20:11:52,057 | INFO     |: POS ERROR        : (0.220, 1.551, 2936.778)
2025-04-21 20:11:52,057 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:55,702 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:11:55,705 | INFO     |: GT index      : (11, 5, 14)
2025-04-21 20:11:55,706 | INFO     |: Bel index     : (11, 5, 14) with prob = 0.9998969
2025-04-21 20:11:55,708 | INFO     |: Bel_bar prob at index = 0.004918778369643369
2025-04-21 20:11:55,710 | INFO     |: GT            : (1.744, 0.331, 3347.351)
2025-04-21 20:11:55,711 | INFO     |: Belief        : (1.829, 0.305, 110.000)
2025-04-21 20:11:55,713 | INFO     |: POS ERROR     : (-0.085, 0.027, 3237.351)
2025-04-21 20:11:55,714 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 9 -----------------
2025-04-21 20:11:58,796 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:11:58,810 | INFO     |: GT index         : (11, 6, 16)
2025-04-21 20:11:58,812 | INFO     |: Prior Bel index  : (10, 0, 11) with prob = 1.0876907
2025-04-21 20:11:58,813 | INFO     |: POS ERROR        : (0.219, 1.875, 3335.544)
2025-04-21 20:11:58,815 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:02,701 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:12:02,712 | INFO     |: GT index      : (11, 6, 16)
2025-04-21 20:12:02,712 | INFO     |: Bel index     : (11, 7, 16) with prob = 1.0
2025-04-21 20:12:02,713 | INFO     |: Bel_bar prob at index = 0.6496010397010737
2025-04-21 20:12:02,713 | INFO     |: GT            : (1.743, 0.656, 3746.499)
2025-04-21 20:12:02,714 | INFO     |: Belief        : (1.829, 0.914, 150.000)
2025-04-21 20:12:02,714 | INFO     |: POS ERROR     : (-0.086, -0.258, 3596.499)
2025-04-21 20:12:02,715 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 10 -----------------
2025-04-21 20:12:04,792 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:04,794 | INFO     |: GT index         : (10, 7, 16)
2025-04-21 20:12:04,794 | INFO     |: Prior Bel index  : (10, 0, 11) with prob = 1.0876907
2025-04-21 20:12:04,795 | INFO     |: POS ERROR        : (-0.205, 2.156, 3707.578)
2025-04-21 20:12:04,795 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:08,219 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:12:08,235 | INFO     |: GT index      : (10, 7, 16)
2025-04-21 20:12:08,236 | INFO     |: Bel index     : (10, 7, 16) with prob = 1.0
2025-04-21 20:12:08,237 | INFO     |: Bel_bar prob at index = 0.2486745112916887
2025-04-21 20:12:08,238 | INFO     |: GT            : (1.319, 0.937, 4117.770)
2025-04-21 20:12:08,239 | INFO     |: Belief        : (1.524, 0.914, 150.000)
2025-04-21 20:12:08,240 | INFO     |: POS ERROR     : (-0.205, 0.022, 3967.770)
2025-04-21 20:12:08,243 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 11 -----------------
2025-04-21 20:12:11,397 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:11,412 | INFO     |: GT index         : (7, 6, 3)
2025-04-21 20:12:11,412 | INFO     |: Prior Bel index  : (10, 0, 11) with prob = 1.0876907
2025-04-21 20:12:11,413 | INFO     |: POS ERROR        : (-1.099, 2.055, 4164.008)
2025-04-21 20:12:11,414 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:15,147 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:12:15,160 | INFO     |: GT index      : (7, 6, 3)
2025-04-21 20:12:15,161 | INFO     |: Bel index     : (7, 7, 3) with prob = 0.9999987
2025-04-21 20:12:15,164 | INFO     |: Bel_bar prob at index = 0.09067102063965905
2025-04-21 20:12:15,166 | INFO     |: GT            : (0.425, 0.836, 4574.008)
2025-04-21 20:12:15,167 | INFO     |: Belief        : (0.610, 0.914, -110.000)
2025-04-21 20:12:15,168 | INFO     |: POS ERROR     : (-0.185, -0.078, 4684.008)
2025-04-21 20:12:15,169 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 12 -----------------
2025-04-21 20:12:17,244 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:17,259 | INFO     |: GT index         : (6, 4, 5)
2025-04-21 20:12:17,260 | INFO     |: Prior Bel index  : (8, 5, 6) with prob = 1.1902763
2025-04-21 20:12:17,261 | INFO     |: POS ERROR        : (-0.668, -0.094, 4669.855)
2025-04-21 20:12:17,262 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:20,916 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:12:20,921 | INFO     |: GT index      : (6, 4, 5)
2025-04-21 20:12:20,922 | INFO     |: Bel index     : (6, 5, 5) with prob = 1.0
2025-04-21 20:12:20,923 | INFO     |: Bel_bar prob at index = 0.38517530898292607
2025-04-21 20:12:20,924 | INFO     |: GT            : (0.246, 0.211, 4979.855)
2025-04-21 20:12:20,925 | INFO     |: Belief        : (0.305, 0.305, -70.000)
2025-04-21 20:12:20,926 | INFO     |: POS ERROR     : (-0.059, -0.094, 5049.855)
2025-04-21 20:12:20,927 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 13 -----------------
2025-04-21 20:12:23,024 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:23,041 | INFO     |: GT index         : (5, 3, 2)
2025-04-21 20:12:23,042 | INFO     |: Prior Bel index  : (8, 5, 6) with prob = 1.1902763
2025-04-21 20:12:23,043 | INFO     |: POS ERROR        : (-0.920, -0.405, 4961.111)
2025-04-21 20:12:23,044 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:26,763 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:12:26,783 | INFO     |: GT index      : (5, 3, 2)
2025-04-21 20:12:26,784 | INFO     |: Bel index     : (5, 3, 2) with prob = 1.0
2025-04-21 20:12:26,785 | INFO     |: Bel_bar prob at index = 0.0019710467686404097
2025-04-21 20:12:26,785 | INFO     |: GT            : (-0.005, -0.100, 5271.112)
2025-04-21 20:12:26,786 | INFO     |: Belief        : (0.000, -0.305, -130.000)
2025-04-21 20:12:26,786 | INFO     |: POS ERROR     : (-0.005, 0.205, 5401.112)
2025-04-21 20:12:26,787 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 14 -----------------
2025-04-21 20:12:29,911 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:29,928 | INFO     |: GT index         : (4, 3, 1)
2025-04-21 20:12:29,929 | INFO     |: Prior Bel index  : (4, 0, 4) with prob = 1.3597352
2025-04-21 20:12:29,930 | INFO     |: POS ERROR        : (-0.070, 0.966, 5338.188)
2025-04-21 20:12:29,933 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:33,316 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:12:33,318 | INFO     |: GT index      : (4, 3, 1)
2025-04-21 20:12:33,319 | INFO     |: Bel index     : (4, 3, 1) with prob = 1.0
2025-04-21 20:12:33,319 | INFO     |: Bel_bar prob at index = 0.010819176195519349
2025-04-21 20:12:33,320 | INFO     |: GT            : (-0.375, -0.253, 5608.189)
2025-04-21 20:12:33,320 | INFO     |: Belief        : (-0.305, -0.305, -150.000)
2025-04-21 20:12:33,321 | INFO     |: POS ERROR     : (-0.070, 0.052, 5758.189)
2025-04-21 20:12:33,321 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


----------------- 15 -----------------
2025-04-21 20:12:36,407 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:36,425 | INFO     |: GT index         : (3, 3, 0)
2025-04-21 20:12:36,426 | INFO     |: Prior Bel index  : (4, 0, 4) with prob = 1.3597639
2025-04-21 20:12:36,426 | INFO     |: POS ERROR        : (-0.463, 0.969, 5675.265)
2025-04-21 20:12:36,427 | INFO     |: ---------- PREDICTION STATS -----------
2025-04-21 20:12:40,311 | INFO     |: ---------- UPDATE STATS -----------
2025-04-21 20:12:40,325 | INFO     |: GT index      : (3, 3, 0)
2025-04-21 20:12:40,326 | INFO     |: Bel index     : (2, 3, 0) with prob = 0.9999858
2025-04-21 20:12:40,326 | INFO     |: Bel_bar prob at index = 0.1685306356398619
2025-04-21 20:12:40,327 | INFO     |: GT            : (-0.768, -0.250, 5945.265)
2025-04-21 20:12:40,329 | INFO     |: Belief        : (-0.914, -0.305, -170.000)
2025-04-21 20:12:40,331 | INFO     |: POS ERROR     : (0.146, 0.055, 6115.265)
2025-04-21 20:12:40,332 | INFO     |: ---------- UPDATE STATS -----------
-------------------------------------


                    
                    

Lab 11

In this lab, we take our simulated localization and apply it to the real robot. To start, we verify that our Bayes filter is working correctly by running it on the simulated robot. The following screenshot shows a simulated run in the maze. While the odometry is not perfect, the belief (blue) is able to follow the ground truth (green) well:

Bayes Filter Simulation

For this lab, we take the robot and perform the same scan we did in Lab 9. We do this at a set of marked spots, then run the Bayes filter on the data to get an estimate of the robot's position. On the Arduino side, we can reuse most of the code with a few changes. First, we change the format of the data we send back, since we only want to return the ToF and yaw values. We also change the number of scans to 18, for 20 degree increments. The workflow for this lab was as follows: place the robot at a marked spot, run the scan, run the Bayes filter on the data, then reset the ToF and yaw arrays.

                    
    case LOCALIZE:
    {
        int curr_pwm = 0;
        int setpoint = yaw;
        int start_time = millis();
        int scans = 0;
        int i = 0;

        while (scans < 18) {
            poll_dmt();
            start_time = millis();
            while (millis() - start_time < 1000) {
                curr_pwm = PID_Turn(yaw, setpoint);
                poll_dmt();
            }
            diff_drive(0, 0);
            int dist = 0;
            for (int j = 0; j < 3; j++) {   // renamed to j to avoid shadowing the array index i
                while (!distanceSensor.checkForDataReady()) {
                    delay(1);
                }
                dist += distanceSensor.getDistance();
            }

            dist /= 3;

            if (i < max_arr) {
                yaw_arr[i] = yaw;
                tof_ranges[i] = dist;
                i++;
            }

            setpoint = (setpoint + 20) % 360;
            integral = 0.0;  // reset integral term for next pid turn
            scans++;
        }

        Serial.println("End Scan drive");

        Serial.println("Sending Back");

        for (int j = 0; j < i; j++) {
            tx_estring_value.clear();
            tx_estring_value.append((float)tof_ranges[j]);
            tx_estring_value.append("|");
            tx_estring_value.append((float)yaw_arr[j]);
            tx_characteristic_string.writeValue(tx_estring_value.c_str());
        }
        Serial.println("Finish send");
        break;
    }
                  
                    
                    

On the Python side, we perform the observation loop with the function below. First we reset our arrays, then start a new notification handler. We then send our localization command, which starts the scan. After measuring how long a full scan takes, I set the code to sleep for 35 seconds to wait for completion. I then need to reverse the arrays, since my scan performs its 360 degree turn in the opposite direction from what the filter expects. After normalizing the yaws to 360 degrees and converting the ToFs to meters, we return the arrays.

                    
    async def perform_observation_loop(self, rot_vel=120):
        """Perform the observation loop behavior on the real robot, where the robot does
        a 360 degree turn in place while collecting equidistant (in the angular space) sensor
        readings, with the first sensor reading taken at the robot's current heading.
        The number of sensor readings depends on "observations_count"(=18) defined in world.yaml.

        Keyword arguments:
            rot_vel -- (Optional) Angular Velocity for loop (degrees/second)
                        Do not remove this parameter from the function definition, even if you don't use it.
        Returns:
            sensor_ranges   -- A column numpy array of the range values (meters)
            sensor_bearings -- A column numpy array of the bearings at which the sensor readings were taken (degrees)
                               The bearing values are not used in the Localization module, so you may return an empty numpy array
        """
        tofs = []
        yaws = []

        def notification_handler_temps(uiud, byte_array):
            s = ble.bytearray_to_string(byte_array)
            temp_arr = s.split("|")
            tofs.append(temp_arr[0])
            yaws.append(temp_arr[1])

        ble.start_notify(ble.uuid['RX_STRING'], notification_handler_temps)

        ble.send_command(CMD.LOCALIZE, '')

        await asyncio.sleep(35)

        ble.stop_notify(ble.uuid['RX_STRING'])

        tof_f = np.array(tofs[::-1], dtype=float)  # reverse tof array as we are turning the opposite way
        yaw_f = np.array(yaws[::-1], dtype=float)
        yaw_normalized = 360 - ((yaw_f - yaw_f[0]) % 360)

        sensor_ranges = np.array(tof_f / 1000)[np.newaxis].T
        sensor_bearings = np.array(yaw_normalized)[np.newaxis].T

        print(sensor_bearings)
        print(sensor_ranges)

        return sensor_ranges, sensor_bearings
                    
                    

Results

After testing, I was able to find the localization beliefs of the robot at 4 positions in the maze. The green represents the ground truth and the blue represents the belief of the robot.

Point: (0,3) Bayes Filter Simulation Bayes Filter Simulation Point: (-3,-2) Bayes Filter Simulation Bayes Filter Simulation Point: (5,-3) Bayes Filter Simulation Bayes Filter Simulation Point: (5,3) Bayes Filter Simulation Bayes Filter Simulation

Discussion

Overall, the robot was able to localize itself well in the maze. Initially, I had a lot of trouble because I had flipped the rotation of the robot, which I resolved by reversing the arrays. I also had to increase the sensor sigma. Interestingly, the robot is able to localize itself extremely well at point (-3,-2) but was slightly off at the other points. After reflecting on my code, I realized that I had a slight issue with flipping the yaws: because the robot starts measuring at 0 degrees, the last value in the array (which becomes the first value in the Bayes filter) is actually at 20 degrees. This would cause the robot to be slightly off. I believe I inadvertently fixed this for (-3,-2), as I had misaligned the robot a bit there.

Lab 12

In this lab, we attempt to navigate the robot through a path using the PID control and localization we developed in the previous labs. The goal is to have the robot navigate through a set of waypoints in the environment. We start by developing a turn-go-turn method with PID-controlled turns and drives.

PID Control

In order to use a turn-go-turn method, we need reliable PID control of the robot's orientation and distance. For the orientation, we use the same PID_Turn function developed in the previous lab, which uses the DMP onboard the robot to get the yaw. For the distance, we use PID control on the ToF sensor. To chain these, we send BLE commands back to back from the Python side, as this makes it much easier to quickly change parameters and test different values without reflashing.

                    
                        # Drive Chain
                        ble.send_command(CMD.DRIVE, "600") 
                        ble.send_command(CMD.TURN, "95") 
                    
                

CMD.DRIVE

The CMD.DRIVE command is used to drive the robot forward to a specified distance from the wall using PID. This distance is measured by the ToF sensor in mm.

CMD.TURN

The CMD.TURN command is used to turn the robot to a specified angle using PID. This angle is measured by the DMP onboard the IMU.

Open Loop Navigation

The below video shows one of our first successful runs with a chain of drive and turn commands. While this was very fast, the robot drifts a lot and was unable to hit the specified waypoints well.

After this, we needed to change a couple of things. First, we added a delay between sending BLE commands. This ensures that the robot comes to a complete stop before the next command is sent, which helps mitigate the drift we see in the robot.

                    
                        # Drive Chain
                        ble.send_command(CMD.DRIVE, "600") 
                        await asyncio.sleep(6)
                        ble.send_command(CMD.TURN, "95") 
                    
                

We also spent significant time tuning the PID parameters for the drive and turn commands. For the drive command, we noticed heavy overshoot, so we detuned the P term. We also added 2 new tunable parameters to the drive tuning: a max PWM and a drive differential. Because small errors accumulate significantly over a run, we wanted as many tunable parameters as possible to ensure accurate driving. The max PWM caps the motor PWM to help mitigate overshoot, and the drive differential compensates for the mismatch between the left and right motors to prevent side-to-side drift when driving in a straight line. For the turn command, we added an extra floor PWM parameter, which ensures that the robot will always turn at least at some rate. The below shows our open loop drive chain; each of the distances was measured to place the robot roughly at the waypoints.

                    
ble.send_command(CMD.TUNE_PID, "0.009|0.00|0.0|135|1.3") # P | I | D | max_pwm | drive_diff
ble.send_command(CMD.TUNE_TURN, "0.9|0.0|0.1|75") # P | I | D | floor_pwm

# Drive Chain
ble.send_command(CMD.DRIVE, "600")
await asyncio.sleep(6)

ble.send_command(CMD.TURN, "90")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "1500") # first long straight
await asyncio.sleep(7)

ble.send_command(CMD.TURN, "180")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "450")
await asyncio.sleep(7)

ble.send_command(CMD.TURN, "90")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "300") # turn before back straight
await asyncio.sleep(7)

ble.send_command(CMD.TURN, "0")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "450") # back straight
await asyncio.sleep(6)

ble.send_command(CMD.TURN, "270")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "700") # round the final corner
await asyncio.sleep(7)

ble.send_command(CMD.TURN, "180")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "600") # end of the path

After tuning the parameters, we were able to get a much more reliable drive and turn that hits the waypoints. The below video shows a successful run of the robot navigating the path with the tuned parameters. Unfortunately, we miss the penultimate waypoint slightly due to overshoot on the last turn.
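
To make the max PWM and drive differential concrete, below is a minimal sketch of how a PID output might be mapped to left and right motor PWMs. The function and its structure are illustrative, not the actual firmware.

# Hedged sketch of the motor output mapping; drive_pwms is our name,
# not a firmware symbol.
def drive_pwms(pid_output, max_pwm, drive_diff):
    # Cap the magnitude of the PID output to limit overshoot.
    pwm = max(-max_pwm, min(max_pwm, pid_output))
    # Scale one side relative to the other to cancel sideways drift,
    # e.g. drive_diff = 1.3 from our tuned parameters above.
    return pwm * drive_diff, pwm  # (left, right)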

Localization Navigation

Once we had an open loop run, we wanted to use localization to correct for errors in our position. We first implement a function that localizes the robot and returns the current belief, which we then use to adjust our position. The below localize_point function returns the current belief.

                    
async def localize_point():
    # Face 270 degrees, the orientation our reference frame expects
    # for localization, then swap in the turn gains tuned for the
    # small scan steps.
    ble.send_command(CMD.TURN, "270")
    ble.send_command(CMD.TUNE_TURN, "6.5|0.0|0.007|35") # scan turn gains

    # Gather observation data by executing a 360 degree scan.
    await loc.get_observation_data()

    # Run the update step of the Bayes filter and plot the belief.
    loc.update_step()
    loc.plot_update_step_data(plot_data=True)

    max_bel_idx = get_max(loc.bel)
    curr_pose = mapper.from_map(*max_bel_idx[0])

    # Restore the turn gains used for the large navigation turns.
    ble.send_command(CMD.TUNE_TURN, "0.9|0.0|0.1|75")

    # Convert the pose from meters to feet, rounding to the nearest
    # grid cell (this also scales the yaw entry, but only x and y
    # are used downstream).
    ft_pose = np.rint(np.array(curr_pose) * 3.28084).astype(int)
    return ft_pose

This function does a few things to get a reliable localization. First, we turn the robot to 270 degrees, which in our reference frame is the proper orientation for localization. We then change our PID turn parameters at runtime: the parameters from the localization lab worked very well for the small scan turns, so instead of spending more time tuning a single set of parameters to handle both large 90 degree turns and small 20 degree turns, we simply swap parameters whenever we want to localize. We run the scan and update the Bayes filter in Python to get the belief of the current position, retune the PID turn parameters back to our normal values, and return the position in feet. While this belief won't give us the exact position of the robot, it lets us confirm whether we are in the right grid cell.
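
Since we swap the gains and must remember to restore them, one way to make this pattern safer would be an async context manager, so the normal gains are restored even if the scan fails. The sketch below assumes the same CMD names and gain strings as our chain; localization_gains is a name we made up.

from contextlib import asynccontextmanager

@asynccontextmanager
async def localization_gains(ble):
    ble.send_command(CMD.TUNE_TURN, "6.5|0.0|0.007|35")  # small-step scan gains
    try:
        yield
    finally:
        ble.send_command(CMD.TUNE_TURN, "0.9|0.0|0.1|75")  # normal turn gains

# Usage inside localize_point:
#   async with localization_gains(ble):
#       await loc.get_observation_data()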

Nav Diagram
Flowchart for localization.

We can implement the new navigation chain with localization and correction following the figure above. We choose to localize at two points, (2,-3) and (0,0), because these were the locations we had the most trouble reaching in our testing. Once the drive chain is expected to be at (2,-3), we localize. If the belief is not at (2,-3), we realign the robot by turning towards the walls and re-running the PID drives to the measured ToF distances. We perform the same process at the origin. While we could have localized again after realigning to confirm the realignment was successful, we decided not to in order to keep the run time low; for the same reason, we did not localize at every waypoint. The below shows the drive chain with localization and realignment.

                    
# Drive Chain
ble.send_command(CMD.DRIVE, "600")
await asyncio.sleep(6)

ble.send_command(CMD.TURN, "95")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "1500") # first long straight
await asyncio.sleep(7)

ble.send_command(CMD.TURN, "180")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "450")
await asyncio.sleep(7)

####### LOCALIZE AT POINT (2,-3) ###########
estimated_two_neg_3 = await localize_point()

if estimated_two_neg_3[0] != 2 or estimated_two_neg_3[1] != -3:
    # Not at (2,-3): drive the robot back to (2,-3)
    # X realignment
    ble.send_command(CMD.TURN, "90")
    await asyncio.sleep(3)
    ble.send_command(CMD.DRIVE, "1200")

    # Y realignment
    ble.send_command(CMD.TURN, "180")
    await asyncio.sleep(3)
    ble.send_command(CMD.DRIVE, "450")
######## END LOCALIZE ########

ble.send_command(CMD.TURN, "90")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "300") # turn before back straight
await asyncio.sleep(7)

ble.send_command(CMD.TURN, "0")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "450") # back straight
await asyncio.sleep(6)

ble.send_command(CMD.TURN, "270")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "700") # round the final corner
await asyncio.sleep(7)

ble.send_command(CMD.TURN, "180")
await asyncio.sleep(3)

ble.send_command(CMD.DRIVE, "600") # end of the path

####### LOCALIZE AT ORIGIN ###########
estimated_origin = await localize_point()

if estimated_origin[0] != 0 or estimated_origin[1] != 0:
    # Not at (0,0): drive the robot back to (0,0)
    # X realignment
    ble.send_command(CMD.TURN, "90")
    await asyncio.sleep(3)
    ble.send_command(CMD.DRIVE, "600")

    # Y realignment
    ble.send_command(CMD.TURN, "180")
    await asyncio.sleep(3)
    ble.send_command(CMD.DRIVE, "650")
######## END LOCALIZE ########
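
The realignment logic at (2,-3) and at the origin is nearly identical, so it could be factored into a helper. Below is a sketch under that assumption; realign_if_off is our name, the drive distances are the ones from the chain above, and ble, CMD, and localize_point come from the notebook.

async def realign_if_off(target, x_drive_mm, y_drive_mm):
    pose = await localize_point()
    if pose[0] != target[0] or pose[1] != target[1]:
        # X realignment against the wall faced at 90 degrees
        ble.send_command(CMD.TURN, "90")
        await asyncio.sleep(3)
        ble.send_command(CMD.DRIVE, str(x_drive_mm))
        # Y realignment against the wall faced at 180 degrees
        ble.send_command(CMD.TURN, "180")
        await asyncio.sleep(3)
        ble.send_command(CMD.DRIVE, str(y_drive_mm))

# e.g. await realign_if_off((2, -3), 1200, 450)
#      await realign_if_off((0, 0), 600, 650)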

After implementing this, we were able to get our best run.

Looking at the data, we can see the beliefs at the two positions where we localize. In this run, the localization at (2,-3) tells us we are at the point, so we do not have to realign. At the origin, however, the belief is at (1,0), so we must realign the robot. Looking at the video, the robot wasn't actually off the origin, but the realignment does not harm the robot's position since we are just re-running our PID drives to align at the proper distances from the walls.

Localization at (2,-3)

Plotted Localization Points

Localization at (0,0)

Calculated Beliefs

Discussion

DMP drift

An issue we had with the DMP was that it would drift, especially on startup. Normally this would not be significant, but even a slight amount of drift causes a large change in course over the long straights.
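
To get a feel for the sensitivity, here is a quick back-of-the-envelope check; the numbers are illustrative, not measured.

import math

# Lateral offset from holding a small yaw error over a straight segment.
straight_mm = 1500      # length of a long straight
yaw_error_deg = 3       # plausible DMP drift
offset_mm = straight_mm * math.tan(math.radians(yaw_error_deg))
print(f"{offset_mm:.0f} mm of lateral drift")  # ~79 mm, enough to miss a waypoint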

ToF Struggles

A major struggle we had was with the ToF sensor. Very often, when measuring over long ranges, it would pick up the ground or read past the back wall. To mitigate this, we remounted the ToF sensor to sit flush with the car, and we switched the ToF to long range mode.

Overall, we are happy with the results of this lab. We achieved solid open loop and localization runs, even though the localization belief was slightly off at the origin, and we demonstrated that localization can be used to realign the robot's position.

Collaboration

Sean Zhang and I worked together on this lab.