DIY 3D Scanner Via Inverse Square Law

When you look around the market, there are many different types of 3D scanners and 3D sensors out there; a quick look at PCL's IO library lists quite a few. Microsoft has the Kinect, which uses time of flight to measure depth. The PR2 robot uses projected stereo mapping: it projects a pattern onto the floor, then uses visual cues from the projected pattern to reconstruct its depth. Then there are laser range finders, which use rotating mirrors to scan depth. Some sensors use multiple cameras to estimate depth, much like we use our two eyes. All of these methods have their own problems: measuring time of flight is challenging and expensive, rotating laser mirrors are slow and unsafe, and stereoscopic methods are still quite inaccurate. Enter 3D scanning via the inverse square law. This is a new technique that I have hacked together using one LED and a camera. It is somewhat like projected stereo mapping, but simpler and less computationally expensive.

Theory

So how does it work? In a traditional time of flight imaging system, a pulse of light is sent out and the time it takes for the reflection to come back is used to gauge the distance. Light is very fast, so the time difference for most objects is so minuscule that the electronics become very complicated. But there is something else that falls off with distance: power. The further we move away from a light source, the less light we receive. This is characterised by the inverse square law.

 P_{detected} = \frac{P_{source}}{r^2}

Since power is very easy to measure, in theory it should be quite easy to measure the distance from the source using this law. The challenge is that the apparent brightness of an object does not depend on distance alone: not all materials reflect light in the same way. Imagine a setup like the one below:
[Diagram: LED light source at distance P from the object, camera at distance Q from the object]

Let us assume that the light source emits light at a power of L.

Then an object at a distance P from the source receives  \frac{L}{P^2}  of that light.

If its albedo is a then the light reflected back is:

\frac{aL}{P^2}

According to the inverse square law, the light that reaches a camera at distance Q from the object is:

\frac{aL}{Q^2P^2}

In most cases P \approx Q, since the camera sits right next to the light source. Therefore the above equation can be rewritten as:

\frac{aL}{P^4}

Now we can control L by modulating the brightness of the light source. Taking two images at source powers L_1 and L_2 leaves us with a system of simultaneous equations for the observed intensities:

I_1 = \frac{aL_1}{P^4}

I_2 = \frac{aL_2}{P^4}

These equations can be solved quite easily using two images of different brightness. If we solve for P, we get the distance between the object and the camera.
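
To make the arithmetic concrete, here is a minimal OpenCV sketch of the recovery step (hypothetical function and variable names, not the code from the app linked below; it assumes two grayscale frames captured at known relative source powers, a known constant albedo, and skips division-by-zero guards):

#include <opencv2/opencv.hpp>

// Sketch only: recover a depth map from two frames taken at LED powers
// L1 and L2, assuming a constant albedo. Subtracting the frames also
// cancels any ambient light that is the same in both images.
cv::Mat estimateDepth(const cv::Mat &frame1, const cv::Mat &frame2,
                      double L1, double L2, double albedo)
{
    cv::Mat f1, f2;
    frame1.convertTo(f1, CV_32F); // work in floating point, not unsigned chars
    frame2.convertTo(f2, CV_32F);

    // I1 - I2 = a(L1 - L2)/P^4, so a/P^4 = (I1 - I2)/(L1 - L2)
    cv::Mat aOverP4 = (f1 - f2) / (L1 - L2);

    // With the albedo a assumed known, P^4 = a / (a/P^4); take the 4th root
    cv::Mat p4 = albedo / aOverP4;
    cv::Mat depth;
    cv::pow(p4, 0.25, depth);
    return depth;
}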

Practical Implementation

In theory this all looks very good, so how do we actually implement it? My hardware looks like this:

[Photo: the rig, with a blue LED, an Arduino and a webcam]

As you can see, there's a blue LED, an Arduino and a Logitech webcam. The Arduino controls the brightness of the LED while the webcam observes. The code for the Arduino is as follows:

https://create.arduino.cc/editor/arjo129/b42a42e1-00fd-499b-bd9f-c1915d9eea61/preview?embed

I know there's an analogWrite() function, but I decided to be fancy about it and do the timing myself.
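
For flavour, a stripped-down sketch of the same idea looks something like this (not the exact code in the embed above; the pin number and timings are assumptions):

// Minimal sketch of manually-timed LED modulation.
const int LED_PIN = 9; // assumed pin

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

// One software-PWM cycle: on for onTime microseconds, off for offTime.
void pulse(int onTime, int offTime) {
  digitalWrite(LED_PIN, HIGH);
  delayMicroseconds(onTime);
  digitalWrite(LED_PIN, LOW);
  delayMicroseconds(offTime);
}

void loop() {
  // Hold each brightness long enough for the webcam to grab a frame.
  for (long i = 0; i < 500; i++) pulse(900, 100); // bright frame
  for (long i = 0; i < 500; i++) pulse(200, 800); // dim frame
}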

The app itself is available on GitHub: https://github.com/arjo129/LightModulated3DScan

It is written in C++ using the Qt framework and OpenCV (I've hard-coded the path to OpenCV, so you'll need to change it). There's still a lot of work to be done to get it to production quality.

Here is my first scan:

[Scan image]

You can see the object has been digitised quite well.

Here is another scan, of my mom's glasses with a tissue box in the background. The actual scan was done in broad daylight.

[Scan: glasses with a tissue box behind them]

The code isn't perfect. It currently does the computation with unsigned chars, which leads to saturation. You can see this problem on the hinge of the glasses, which sat closest to the light source. Another image that makes the problem even clearer is this one:

[Scan: bright regions washed out near the light source]

Here you can see that the parts nearest the light source and with the highest albedo are oversaturated. This problem would be solved by moving the code to floating-point arithmetic instead of unsigned 8-bit integers.
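
With OpenCV the fix is essentially a conversion before any arithmetic; something along these lines (hypothetical variable names):

cv::Mat f1, f2;
frame1.convertTo(f1, CV_32F); // 32-bit floats instead of unsigned chars
frame2.convertTo(f2, CV_32F);
cv::Mat diff = f1 - f2;       // no longer clips at 255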

Expanding Vocabulary Capabilities of µSpeech

In order to take into account the varying complexities of vocabulary, we previously looked at offline algorithms for skewness. Now I propose a simpler method. Skewness requires a large number of multiplications and divisions, which makes it unsuitable for quick calculations on an AVR: the AVR used by an Arduino has no hardware divide instruction (and only a limited hardware multiply), so these operations are implemented in software. To speed the process up, I came up with another simple yet effective (though not always reliable) measure of skewness.

We treat every utterance (syllable) as a probability distribution of individual phonemes over time. To test which phoneme comes first, we record the number of phonemes per unit time: every 16 cycles we sample the number of phonemes detected, and from those samples we keep a catalogue of statistical features.

Skewness can be looked at graphically:

Figure 1. Skewness of a unimodal distribution

As one can see, the mode of a negatively skewed distribution is to the right of the centre of the distribution, so the mode minus the median is positive. The opposite holds for a positively skewed graph. µSpeech uses this property to determine the order of a sound in an utterance.
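
As a sketch, the whole test boils down to a few lines of C++ (a hypothetical helper, not the actual µSpeech source):

#include <vector>

// counts[i] is the number of times a phoneme was detected in time
// slice i. Returns mode - median: a positive result means negative
// skew, i.e. the phoneme is concentrated towards the end of the
// utterance.
int modeMinusMedian(const std::vector<int> &counts) {
    if (counts.empty()) return 0;
    int total = 0;
    size_t mode = 0;
    for (size_t i = 0; i < counts.size(); i++) {
        total += counts[i];
        if (counts[i] > counts[mode]) mode = i; // time slice with the peak
    }
    int running = 0;
    size_t median = 0;
    for (size_t i = 0; i < counts.size(); i++) {
        running += counts[i];
        if (2 * running >= total) { median = i; break; } // half the mass seen
    }
    return (int)mode - (int)median;
}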

Regarding µSpeech 4.1.2+

Dear Users of µSpeech library,
With the release of the 4.2.alpha library I have been receiving a lot of mail about the debug µSpeech program not working. I sincerely apologize for this inconvenience, as I have very little time on hand to address the issues with 4.1.2+ and 4.2.alpha. I realized that I had made the error of not flagging 4.1.2 as a pre-release; I have fixed this now. I also realized that the documentation I have on YouTube is out of date, and I will be addressing this as soon as possible.

If anyone has the time and is willing to look into why debug_uspeech is giving trouble, please feel free to create a pull request on GitHub. Since I am currently going through my school finals, I am unable to devote the time required to address this issue. In the meantime, please revert to µSpeech 4.1.1.

Sincere Apologies,

Arjo Chakravarty

 

P.S. If you wish to play with 4.2.alpha then there is a copy of the original debug uSpeech here: https://gist.github.com/arjo129/9c73ec533b2b1a6f91b4/

Wiring two Arduinos to do your bidding (using I2C)

Ok, so making rovers with one Arduino is so overdone. What if you had two rovers that worked together to solve problems, or even to turn together? This can be fairly challenging. To communicate between two Arduinos, one can use the two-wire interface known as I2C, a protocol for talking between the Arduinos. One can connect over a hundred Arduinos on the same bus to make a super-duino, but for now let's keep it down to two.

Many peripherals can be attached to your Arduino (µC) via this protocol. One Arduino acts as the all-powerful overlord (a.k.a. the master).

Schematic of a generic I2C circuit (courtesy Wikipedia)

Let's put this in the context of two Arduinos. Here's how you wire them:

Schematic for your Arduinos; the squiggly lines are resistors (courtesy Instructables).

So one Arduino will be the master and one the slave. It's up to you which one does what; however, the master is programmed separately from the slave, so you will need two programs: one for the master and one for the slave.

Now for the code. The actual I2C protocol is fairly complex, but the folks at Arduino have simplified the process significantly through the use of a library. A library stores extra code which may be useful, in order to simplify your life. The library here is called Wire, so at the top of both programs add:

#include <Wire.h>

Now let's deal with the commands that can be sent: left, right, stop and reverse. We can use preprocessor directives to give these commands names. Again, at the top of both the master and the slave, copy and paste the following:

#define LEFT 0
#define RIGHT 1
#define STOP 2
#define REVERSE 3

Now we have to treat the slave a little differently. Because I2C supports multiple devices, each slave device has its own address (a number between 1 and 127 that identifies it). So in the void setup() function, the following needs to be placed:

void setup(){
  ... //some of your code
   Wire.begin(6); 
}

You have declared your slave as having the address 6. This is a bit like the IP address of a computer, or the URL which points you to this website. Next you have to register a function which responds when the master comes around to command the slave.

void setup(){
  ... //some of your code
   Wire.begin(6);
   Wire.onReceive(followCommand); 
}

We have not yet defined followCommand, the function that will follow the master's orders, so we shall implement it now (full code for the slave listed). Note that the Wire library calls this handler with the number of bytes received; the command itself must be read with Wire.read():

#include <Wire.h>
#define LEFT 0
#define RIGHT 1
#define STOP 2
#define REVERSE 3

void followCommand(int numBytes){
   //what your robot will do
   while(Wire.available()){
     int command = Wire.read(); //read one command byte
     if(command == LEFT){
       //Write code for turning your robot left
     }
     if(command == RIGHT){
       //Write code for turning your robot right
     }
     //You get the idea...
   }
}
void setup(){
   //some of your code
   Wire.begin(6);
   Wire.onReceive(followCommand); 
}
void loop(){
}

We have now implemented a slave, but we still need a master. This is easier: the master has no address, so its setup looks like this:

void setup(){
  //Your code...
  Wire.begin();
}

To send a command, the following calls can be used (note that in Arduino IDE 1.0 and later, Wire.send() was renamed Wire.write()):

  Wire.beginTransmission(6); // transmit to device #6
  Wire.write(LEFT);          // queue the LEFT command (change to whatever you need)
  Wire.endTransmission();    // actually send it

So the master program would look something like this:

#include <Wire.h>
#define LEFT 0
#define RIGHT 1
#define STOP 2
#define REVERSE 3

void setup(){
   //some of your code
   Wire.begin(); 
}
void loop(){
  //Some code
  //Suddenly you want to transmit a message to make the slave turn left
  Wire.beginTransmission(6); // transmit to device #6
  Wire.write(LEFT);          // queue the LEFT command
  Wire.endTransmission();    // send it
  //Some more code
}

Upload the master program to the master Arduino and the slave program to the slave Arduino and you are ready to go!

Creative Coding Quick Reference Sheet

In this page we shall begin to program our robot. To understand how to program, we first need to know how a computer works. A computer understands nothing on its own; it is up to the programmer to tell it what to do. At the heart of the computer is a CPU, which translates machine code into a series of mathematical operations performed on its inputs. When we program, we write in a structured language such as C++ or Java, which a program known as a compiler translates into machine code; the CPU then interprets that machine code as mathematical operations. Java is a little special in this respect: when you compile a piece of Java, it gets converted to Java bytecode, and when a user runs the application, a program known as the Java Virtual Machine translates the bytecode into machine code on the fly.

Key components of a programming language

Programming languages are the way we instruct our computers. In this club we will use Java and C++. For our purposes, Java is used by the Processing application on our desktops and mobile phones, and C++ is used by the Arduino microcontroller. The two languages are similar in the way we express ourselves in them, and most other programming languages follow a similar pattern. The tables below compare the basics of the two.

[Tables: basic syntax of Java and C++, side by side]

The semicolon (;) and other grammar

In both Java and C++ one must put a semicolon at the end of every statement; it's basically like a full stop. Both languages also ignore extra whitespace.

Functions

Computers are stupid, so apart from what's in the tables above, the computer understands nothing else. There are no special words that inherently mean something; what programmers have done is create their own words, known as functions.

In C++ (Arduino) a function looks like this:

void left(){
     //tell your robot how to turn left
}

One would invoke the function:

    left();

A function can take an input and give an output:

    int add1(int x){
       return x+1;
    }

One can call this with:

add1(1);

To handle output one would create a variable to store the data returned.

int val = add1(1); //val will equal 1+1 = 2

Easy.

Now how do you use these? Well, the Arduino and Processing environments generally come with what is known as a library. Libraries provide a list of functions to make life easier (otherwise you would have to write a whole operating system, or tell the Arduino how to turn on a pin at the register level, which is fairly nasty compared to digitalWrite(9, HIGH);).
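
For a sense of what the library buys you, here is roughly the same pin write with and without it (register names are for an ATmega328-based board, where digital pin 9 is bit 1 of port B):

// With the Arduino library:
pinMode(9, OUTPUT);
digitalWrite(9, HIGH);

// Without it, poking the AVR registers directly:
DDRB  |= (1 << DDB1);   // configure pin 9 (PB1) as an output
PORTB |= (1 << PORTB1); // drive it high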

Where to go from here

http://processing.org/learning/objects/

http://processing.org/learning/pixels/

http://arduino.cc/en/Tutorial/AnalogReadSerial

http://arduino.cc/en/Tutorial/DigitalReadSerial

Using skewness to perform syllable recognition – Part I (Theory)

Currently the syllable class contains an accumulator for various letters; if certain letters occur, the word corresponds to a particular syllable. I think this is fairly limited, because "fish" and "shift" will be interpreted as the same word. Yet µSpeech should be capable of better: we are able to tell when an individual letter has been said. For instance, the "f" in "fish" should occur mostly at the beginning, whereas the "f" in "shift" should occur at the end.

To solve this problem I have been considering two methods: skewness, and a method based on pure calculus. Today I will explore skewness.

Skewness – The theory

Skewness is a measure of how much a distribution leans to one side. Its statistical definition is:

\frac{\mu_3}{\sigma^3}

where \mu_3 is the third central moment, i.e. E[(x-\mu)^3].

Now, going back to high school mathematics:

E[(x-\mu)^3] = E[x^3 - 3x^2\mu + 3x\mu^2 - \mu^3] = E[x^3] - 3\mu E[x^2] + 3\mu^2 E[x] - \mu^3

Given that \mu = E[x]:

E[(x-\mu)^3] = E[x^3] - 3\mu (E[x^2] - \mu^2) - \mu^3

Since \sigma^2 = E[x^2]-\mu^2:

E[(x-\mu)^3] = E[x^3] - 3\mu \sigma^2 - \mu^3

So the algorithm needs to compute three quantities: E[x^3], \mu and \sigma^2.

Making the algorithm online

Now, given that µSpeech has stringent memory requirements, it seems imperative that we devise the algorithm so that the following hold:

  • Data is not kept in an array.
  • Computation is minimal.

Keeping these in mind, let's look at ways of computing the three variables online. To start with, let's tackle the simplest algorithm, the one for \mu.

Given that \mu = E[x] = \sum x_i p_i = \frac{1}{n}\sum x_i, one can write an update rule as follows:

\mu_i = \mu_{i-1} + \frac{x_i - \mu_{i-1}}{i}

This can be extended to E[x^3]:

E[x^3]_i = E[x^3]_{i-1} +\frac{x^3_i-E[x^3]_{i-1}}{i}

The third value we need to compute is \sigma^2:

\sigma^2 = E[x^2]-E[x]^2

So we need to find E[x^2]:

E[x^2]_i = E[x^2]_{i-1} +\frac{x^2_i-E[x^2]_{i-1}}{i}
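
Putting the three update rules together, a sketch of the online computation in C++ could look like this (a direct transcription of the maths above, not the actual µSpeech code):

#include <cmath>

// Online estimates of E[x], E[x^2] and E[x^3], updated one sample at a
// time with no array storage.
struct OnlineSkew {
    double mean = 0, ex2 = 0, ex3 = 0;
    long n = 0;

    void add(double x) {
        n++;
        mean += (x - mean) / n;             // mu update
        ex2  += (x * x - ex2) / n;          // E[x^2] update
        ex3  += (x * x * x - ex3) / n;      // E[x^3] update
    }

    double skewness() const {
        double var = ex2 - mean * mean;                         // sigma^2
        double m3  = ex3 - 3 * mean * var - mean * mean * mean; // mu_3
        return m3 / (var * std::sqrt(var));                     // mu_3 / sigma^3
    }
};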

Part 2 coming soon.

µSpeech 4.0 coming soon with codebender.cc support

The µSpeech library is almost ready to be deployed as version 4.0. A number of bugs have been fixed and new features have been added, among them a way to store and compare words. It is also significantly easier to calibrate the library, and I have augmented the documentation with a video on calibrating the latest version. If you are interested in trying it beforehand:


git clone -b 4.0-workingBranch https://github.com/arjo129/uSpeech.git

UPDATE: 4.0 is now mainstream, just go to the downloads page to download the latest version.

This is how to calibrate the phoneme recognizer:

The new API docs will be coming up soon. I am also in the process of getting the library updated at codebender.cc, so those of you who use it to program your Arduino can enjoy the benefits of µSpeech.

Complete list of changes:

  • Updated codebender support.
  • Fixed vowel detection.
  • Packaged in an Arduino IDE friendly format.
  • Video documentation.
  • Improved ease of use.
  • New API for easy word recognition.