# Designing and Building an ADC for the Raspberry Pi

The Raspberry Pi has become the go-to board whenever one needs to add some electronics to a project, yet it is not a very good tool for controlling electronics: it comes with only a handful of 3.3 V digital GPIO pins. Say, for instance, you want to hook a potentiometer up to the Pi. How would you do it? The usual answer is to use an external ADC (analogue-to-digital converter). There are numerous prebuilt ADCs out there; I, however, decided to try to build my own.

There are numerous ADC designs out there; when designing such a device, one must take a few things into account:

- Sample rate
- Hardware complexity
- Accuracy
- Resolution

For my homemade ADC, accuracy and resolution are not the highest priority: I just want to be able to read the value of a potentiometer. Hardware complexity is very important, though; ideally, I should need only a minimal number of components to build this ADC.

The central component of an ADC is the comparator: a circuit that, as its name suggests, compares two voltages. The circuit symbol for the comparator is shown below.

A comparator

Basically, if V1 > V2, Vout will be high; otherwise Vout is low. One technique, known as the flash ADC, uses multiple such comparators, with each one's reference voltage held at a different level, to determine the incoming voltage. However, the flash ADC requires many comparators. Another way of solving the problem is to use only one comparator and vary V1. This does not give the best results, but it's a good enough compromise for us.
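To make the two approaches concrete, here is a small simulation sketch (purely illustrative, not part of any build): a comparator is just a greater-than test, and a flash ADC is a bank of comparators whose thresholds form an evenly spaced resistor-ladder reference.

```python
def comparator(v1, v2):
    """Vout is high exactly when v1 > v2."""
    return v1 > v2

def flash_adc(v_in, v_ref=3.3, bits=3):
    """Simulate a flash ADC: 2**bits - 1 comparators, each held at a
    different rung of the reference ladder; the output code is simply
    how many comparators read high."""
    levels = 2 ** bits
    thresholds = [v_ref * i / levels for i in range(1, levels)]
    return sum(comparator(v_in, t) for t in thresholds)

print(flash_adc(2.0))  # 4 of the 7 comparators read high
```

A 3-bit flash ADC already needs seven comparators, which is exactly the hardware cost the single-comparator approach avoids.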

In order to vary V1, I decided to use a capacitor and a resistor as my voltage reference. A capacitor stores energy, while a resistor dissipates energy. The plan: charge the capacitor fully, then slowly let the voltage across the capacitor and resistor drop.

RC Circuit

In the circuit above, a short high-voltage pulse is sent to the capacitor. The capacitor charges fully, then discharges across the 1k resistor. The diode prevents current from flowing back into the source. This creates a well defined pattern. The equation for the discharge is: $V = V_0 e^{-\frac{t}{RC}}$
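In software, the controller's job is just to time how long the comparator takes to flip and then invert the discharge equation. A minimal Python sketch with illustrative component values (1 kΩ, 100 µF, a 3.3 V supply; not the exact parts of my build):

```python
import math

R, C, V0 = 1_000, 100e-6, 3.3   # illustrative: 1 kΩ, 100 µF, 3.3 V supply

def voltage_at(t):
    """Voltage across the capacitor t seconds into the discharge."""
    return V0 * math.exp(-t / (R * C))

def discharge_time(v_target):
    """Time for the capacitor to decay from V0 down to v_target."""
    return -R * C * math.log(v_target / V0)
```

Once the comparator flips at time t, `voltage_at(t)` recovers the unknown input voltage the comparator was holding on its other pin.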

# Python Asyncio tutorial

Note: this tutorial requires Python 3.5.3 or newer.

Python has always been known as a slow language with poor concurrency primitives. Yet no one can deny that Python is one of the most well-thought-out languages: prototyping in Python is far faster than in most languages, and the rich community and libraries that have grown around it make it a joy to use. Recently I had to write a web scraper for a side project. The scraper uses the Hacker News Firebase API to collect all comments. Unfortunately, the API requires one query per individual comment. Put simply, this sucks: items on the front page can accumulate up to 500 comments, which means 500 network requests per story, and there are hundreds of stories in the queue that need to be analyzed. Doing this on a single thread is slow. A friend of mine implemented a multithreaded version with a 2x speed-up, but I felt things could be faster. The next morning I ported my old code to use Python's asyncio primitives, and lo and behold, I managed to cut processing time from 10 minutes down to 35 seconds. Fantastic, isn't it?

Sadly, when I was hunting around the internet I found very little beginner-friendly material about asyncio (IMHO the reference docs are quite good, but they are not a gentle introduction). So here it is.

### What is asynchronous IO and asyncio?

The idea behind asynchronous IO is that certain operations, such as making network calls, reading files from your disk, writing files or making database queries, are slow. When your program makes such a request, the whole program blocks until the operation completes:

import requests

def do_something(response):
    ...

r = requests.get('https://www.example.com/')  # execution blocks here until the response arrives
do_something(r)


### OK you’ve ranted enough, just show me how to code it!

In this tutorial I will be using the aiohttp library, so you might want to install it if you have not already:

pip install aiohttp

Now we are ready to proceed; let's make an asynchronous request:

import aiohttp
import asyncio

session = aiohttp.ClientSession()

async def get_page(url):
    f = await session.get(url)
    print("completed request to %s" % url)
    return f

Notice the two new keywords, async and await? These make a function asynchronous, meaning that once the interpreter hits an await, it will suspend the function and carry on doing something else until session.get() completes. Internally, when you call get_page("www.example.com") it returns what is known as a future. A future is similar to a Python generator: rather than executing the whole function, it returns an awaitable object, i.e. it captures the state at which we are currently awaiting along with an event handler that tells the event loop how to proceed when the awaited call completes. You can only use await inside an async function definition.
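You can see this without any networking: calling an async function returns immediately with an awaitable (strictly a coroutine object, which asyncio wraps into a future), not a result. A tiny sketch with illustrative names:

```python
import asyncio

async def slow_double(x):
    # suspends here until the sleep (a stand-in for slow IO) finishes
    await asyncio.sleep(0.1)
    return 2 * x

c = slow_double(21)       # nothing has executed yet...
print(type(c).__name__)   # 'coroutine' – an awaitable, not a result
c.close()                 # discard it without ever running it
```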

#### But wait you haven’t shown me how to call it…

Enter Event Loops.

An event loop is essentially a loop that keeps waiting for events, as its name implies. For instance, when the computer receives a response from the server, the event loop calls the appropriate event handler. In GUI toolkits, the main function is a giant event loop handling all sorts of events coming in from the keyboard and mouse. To get our asynchronous function to do our bidding, we need to create an event loop.

loop = asyncio.get_event_loop()
future = get_page("http://www.example.com")
loop.run_until_complete(future)

Now we can execute the code and see that we have successfully made a network request. loop.run_until_complete() will keep running until the request is complete. Your code up to now should look something like this:

import aiohttp
import asyncio

session = aiohttp.ClientSession()

async def get_page(url):
    f = await session.get(url)
    print("completed request to %s" % url)
    return f

loop = asyncio.get_event_loop()
future = get_page("http://www.example.com")
loop.run_until_complete(future)

It doesn't seem any faster than my requests example!

### Making multiple requests in parallel

So remember how I said this can lead to a ridiculous increase in speed? Well, it only does so when you are doing multiple things at the same time. Let me introduce a simple function: asyncio.gather(*futures). What asyncio.gather(*futures) does is chain together multiple futures, allowing you to control them all at once. Let's modify our code like so:

import aiohttp
import asyncio

session = aiohttp.ClientSession()

async def get_page(url):
    f = await session.get(url)
    print("completed request to %s" % url)
    return f

loop = asyncio.get_event_loop()
future1 = get_page("http://www.example.com")
future2 = get_page("http://www.example1.com")
loop.run_until_complete(asyncio.gather(future1, future2))

Now run this code. Notice that both futures start at around the same time, but which one returns first is not guaranteed. This is because future1 runs concurrently with future2.

### What if I want to modify the list of futures running in parallel?

You can. Let's say you are building a web crawler like Google's and you want to be able to add new pages to a breadth-first search. This can be done using the asyncio.wait() function.

import aiohttp
import asyncio

session = aiohttp.ClientSession()

async def get_page(url):
    f = await session.get(url)
    print("completed request to %s" % url)
    return f

loop = asyncio.get_event_loop()
future1 = loop.create_task(get_page("http://www.example.com"))
future2 = loop.create_task(get_page("http://www.example1.com"))
new_comer = get_page("http://www.example2.com")  # the future we inject later
old_timers = [future1, future2]
completed, incomplete = loop.run_until_complete(
    asyncio.wait(old_timers, return_when=asyncio.FIRST_COMPLETED))
loop.run_until_complete(asyncio.gather(new_comer, *incomplete))

Here you can see a much more complex example, where, while the event loop is running, we inject new futures into it. Notice that to retrieve the result of a completed future one must call its .result() method.

# Rotation Invariant Histogram Of Gradients Via DFT

I have successfully used HoG (Histogram of Oriented Gradients) for a number of object recognition tasks. In fact, I have found it to be a ridiculously good image descriptor, particularly for rigid bodies. It can easily differentiate between various objects given very few training samples and is much faster than any of the existing object detection algorithms. While I know the current trend among researchers is toward CNNs, I also feel that CNNs are quite computationally expensive and require extensive training, as opposed to HoG, which can easily be implemented on embedded devices.

One of the major problems with HoG, though, is that it is not rotation invariant. The naïve way of solving this is to shift the histogram bins one by one and measure the distance between the histograms at each shift. Naturally, this is a very expensive process at scale: the time complexity is $O(n^2)$. It would be more convenient to find something that performs the same task faster.
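For comparison, the naïve approach might be sketched like this (my own illustration, using numpy's roll for the bin shifting):

```python
import numpy as np

def naive_best_shift(a, b):
    """Try every rotation of b against a and keep the closest:
    n shifts, each costing an n-element distance, hence O(n^2)."""
    n = len(a)
    dists = [float(np.sum((a - np.roll(b, s)) ** 2)) for s in range(n)]
    return int(np.argmin(dists)), min(dists)

a = np.random.rand(20)
b = np.roll(a, 4)                  # rotate the histogram by 4 bins
shift, dist = naive_best_shift(a, b)
print(shift)                       # 16, i.e. rolling b by 20 - 4 recovers a
```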

The task of shifting the histogram bin by bin smelled awfully like correlation; in fact, it is correlation. Correlation has a nice mathematical property:

Let $F(x)$ be the Fourier transform of $x$ and $\star$ the correlation operator. Then $a \star b = F^{-1}(F(a)^{*}\,F(b))$.

So let's try it.

I generated a random histogram:

import matplotlib
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

arr = np.random.rand(20)
plt.plot(arr)
plt.show()

Next I wrote a shift function:

def shift_array(array, quantity):
    shifted_array = [0.0] * len(array)
    i = 0
    while i < len(array):
        shifted_array[i - quantity] = array[i]
        i += 1
    return shifted_array

arr_shifted = shift_array(arr, 3)
plt.plot(arr_shifted)
plt.show()

Now let's try the trick:

arr1 = np.fft.rfft(arr)
arr2 = np.fft.rfft(arr_shifted)
output = np.fft.irfft(np.conj(arr1) * arr2)
plt.plot(output)
plt.show()

Let's try it with a larger shift:

arr_shifted2 = shift_array(arr, 9)
arr1 = np.fft.rfft(arr)
arr2 = np.fft.rfft(arr_shifted2)
output2 = np.fft.irfft(np.conj(arr1) * arr2)
plt.plot(output2)
plt.show()

Notice the peak shifted! This could well tell us how much shift there is. Let's check:

print(20-np.argmax(output2))
print(20-np.argmax(output))

This gives us 9 and 3: exactly the amounts we shifted by!

But what about comparison? While we can determine the angle of best fit, how do we know whether it's actually a fit? Let's take a look:

arr_diff = np.random.rand(20)
arr1 = np.fft.rfft(arr)
arr2 = np.fft.rfft(arr_diff)
output2 = np.fft.irfft(np.conj(arr1) * arr2)
plt.plot(output2)
plt.show()

Notice how there is no clear peak. One could use the simple mean and standard deviation to show that there is no strong peak; this in effect gives an estimate of how strongly correlated the two histograms are.

# DIY 3D Scanner Via Inverse Square Law

When you look around the market, there are many different types of 3D scanners and 3D sensors out there. A quick look at PCL's IO library lists a number of them. Microsoft has the Kinect, which uses time of flight to measure depth. The PR2 robot uses projected stereo mapping (it uses a projector to project a pattern onto the floor, then uses visual cues from the projected pattern to reconstruct depth). Then there are laser range finders, which use rotating mirrors to scan depth. Some sensors use multiple cameras, the way we use our two eyes, to estimate depth. All of these methods have their own problems: measuring time of flight is challenging and expensive, rotating laser mirrors are slow and unsafe, and stereoscopic methods are still quite inaccurate. Enter 3D scanning via the inverse square law. This is a new technique that I have hacked together using one LED and a camera. It is somewhat like projected stereo mapping, but simpler and less computationally expensive.

### Theory

So how does it work? In a traditional time-of-flight imaging system, a pulse of light is sent out and the time it takes for the reflection to come back is used to gauge the distance. Light is very fast, so the time difference for most objects is so minuscule that the electronics become very complicated. But something else changes over distance: power. The further we move away from a light source, the less light we receive. This is characterised by the inverse square law: $P_{detected} = \frac{P_{source}}{r^2}$

Since power is very easy to measure, in theory it should be quite easy to measure the distance from the source using this technique. The challenge is that the brightness of the reflection is not constant: not all materials reflect light in the same way. Imagine a setup like the one below, where the light source sits a distance $P$ from the object and the camera a distance $Q$ from it, and let the light source emit light with power $L$.

Then the object receives $\frac{L}{P^2}$  amount of light.

If its albedo is $a$, then the light reflected back is $\frac{aL}{P^2}$.

According to the inverse square law, the light that reaches the camera is $\frac{aL}{Q^2P^2}$.

In most cases $P \approx Q$, so the above can be rewritten as $\frac{aL}{P^4}$.

Now we can control $L$ by modulating the brightness of the light source. Taking two frames at brightnesses $L_1$ and $L_2$, the measured intensities form a system of simultaneous equations: $I_1 = \frac{aL_1}{P^4}$ and $I_2 = \frac{aL_2}{P^4}$.

These equations can be solved quite easily by using images of two different brightness. If we solve for $P$ we will get the distance between the object and the camera.
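One subtlety: dividing the two equations cancels $P$ along with $a$, so the sketch below assumes the albedo is known (or uniform across the scene). Working with the difference of the two frames has the side benefit of cancelling any constant ambient light. All names and values here are illustrative:

```python
def distance(i1, i2, l1, l2, albedo=1.0):
    """Invert I = albedo * L / P**4, given the intensities i1, i2
    observed under LED powers l1, l2 (albedo assumed known)."""
    return (albedo * (l1 - l2) / (i1 - i2)) ** 0.25

# Forward-simulate a pixel 2.0 units away with albedo 0.7, then invert:
a, p = 0.7, 2.0
i1 = a * 10.0 / p ** 4   # frame with the LED at power 10
i2 = a * 4.0 / p ** 4    # frame with the LED at power 4
print(distance(i1, i2, 10.0, 4.0, albedo=a))  # ≈ 2.0
```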

### Practical Implementation

In theory this all looks very good, so how do we actually implement it? My hardware looks like this: a blue LED, an Arduino and a Logitech webcam. The Arduino controls the brightness of the LED and the webcam observes. The code for the Arduino is as follows:

https://create.arduino.cc/editor/arjo129/b42a42e1-00fd-499b-bd9f-c1915d9eea61/preview?embed

I know there's an analogWrite() function, but I decided to be fancy about it and do the timing myself.

The app itself is available on github: https://github.com/arjo129/LightModulated3DScan

It is written in C++ using the Qt framework and OpenCV (I've hard-coded the path to OpenCV; you'll need to change it). There's still a lot of work to be done to get it to production quality.

Here is my first scan:

You can see the object has been digitised quite well.

Here is another scan, of my mom's glasses with a tissue box in the backdrop. The actual scan was done in broad daylight. The code isn't perfect: it currently uses unsigned chars for the computation, which leads to saturation. You can see this problem on the hinge of the glasses because of their proximity to the light source. Another image makes the problem even clearer: the parts nearest the light source and with the highest albedo are oversaturated. This would be solved by transitioning the code from unsigned 8-bit integers to floating-point arithmetic.

# Expanding Vocabulary Capabilities of µspeech

In order to take the varying complexity of vocabulary into account, we previously looked at offline algorithms for skewness. Now I propose a simpler method. Skewness requires a very large number of multiplications and divisions, which makes it unsuitable for quick calculation on an AVR: the AVR architecture used by the Arduino has no hardware divide (and, on many parts, no hardware multiply), so this arithmetic is implemented in software. In order to speed the process up, I came up with another simple yet effective (though not always reliable) measure of skewness.

We treat every utterance (syllable) as a probability distribution of individual phonemes. To test which phoneme comes first, we record the number of phonemes per unit time: every 16 cycles we sample the number of phonemes, and from that we keep a catalogue of statistical features.

Skewness can be seen graphically (Figure 1: skewness of a unimodal distribution).

As one can see, the mode of a negatively skewed distribution lies to the right of the centre of the distribution, so the mode minus the median is positive. The opposite holds for a positively skewed graph. µSpeech uses this property to determine the order of sounds in an utterance.
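As a sketch of the idea (in Python rather than the AVR code, with a hypothetical list of per-window phoneme counts standing in for the distribution):

```python
def skew_direction(samples):
    """Sign of (mode - median): positive for a negatively skewed
    distribution (mode right of centre), negative for a positively
    skewed one. Uses only comparisons and counting, no multiplies."""
    s = sorted(samples)
    median = s[len(s) // 2]
    mode = max(set(s), key=s.count)
    return mode - median

print(skew_direction([0, 1, 2, 3, 4, 5, 5, 5, 5]))  # 1: long left tail
```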

# Regarding µSpeech 4.1.2+

Dear users of the µSpeech library,
With the release of the 4.2.alpha library I have been receiving a lot of mail about the µSpeech debug program not working. I sincerely apologize for this inconvenience, as I have very little time on hand to address the issues associated with 4.1.2+ and 4.2.alpha. I realized that I had made the error of not flagging 4.1.2 as a pre-release; I have fixed this now. I also realized that the documentation I have on YouTube is out of date, and I will be addressing this as soon as possible.

If anyone has the time and is willing to look into why debug_uspeech is giving trouble, please feel free to create a pull request on GitHub. Because I am currently going through my school finals, I am unable to devote the time required to address this issue. In the meantime, please revert to µSpeech 4.1.1.

Sincere Apologies,

Arjo Chakravarty

P.S. If you wish to play with 4.2.alpha then there is a copy of the original debug uSpeech here: https://gist.github.com/arjo129/9c73ec533b2b1a6f91b4/

# Wiring two Arduino’s to do your bidding (using I2C)

OK, so making rovers with one Arduino is so overdone. What if you had two rovers that worked together to solve problems, or even to turn? This can be fairly challenging. To achieve communication between two Arduinos, one can use the two-wire interface known as I2C. I2C is a method of communicating between the Arduinos. One can connect a hundred Arduinos all on the same bus to make a super-duino, but for now let's keep it down to just two.

Many peripherals can be attached to your Arduino (µC) via this protocol. One Arduino acts as the all-powerful overlord (a.k.a. the master). Schematic of a generic I2C circuit (courtesy of Wikipedia).

Let's put this in the context of two Arduinos. Here's how you wire them: schematic for your Arduinos; squiggly lines correspond to resistors (courtesy of Instructables).

So one Arduino will be the master and one the slave. It's up to you which one does what; however, the master is programmed separately from the slave, so you will need two programs: one for the master and one for the slave.

Now for the code. The actual I2C protocol is fairly complex, but the folks at Arduino have simplified the process significantly through the use of a library. A library stores extra code which may be useful to simplify your life. The library here is known as Wire, so at the top of both programs add:

#include <Wire.h>

Now lets deal with the commands that can be sent: left, right, stop and reverse. We can use something known as preprocessor directives to handle these commands. Again at the top of both the master and slave copy and paste the following:

#define LEFT 0
#define RIGHT 1
#define STOP 2
#define REVERSE 3

Now we have to deal with the slave differently. Because I2C supports multiple devices, each slave device has its own address (a number between 1 and 127 that identifies it). So in the setup() function the following needs to be placed:

void setup(){
  Wire.begin(6);
}

You have declared your slave as having the address 6. This is kind of like the IP address of a computer, or the URL which points you to this website. Next you have to create a function which responds when the master comes around to command the slave, and register it with the Wire library:

void setup(){
  Wire.begin(6);
  Wire.onReceive(receiveEvent); // called whenever the master sends data
}

We have not yet defined followCommand, the function that will follow the master's orders, so let's implement it now (full code for the slave listed):

#include <Wire.h>
#define LEFT 0
#define RIGHT 1
#define STOP 2
#define REVERSE 3

void followCommand(int command){
  if(command == LEFT){
    //Write code for turning your robot left
  }
  if(command == RIGHT){
    //Write code for turning your robot right
  }
  //You get the idea...
}

void receiveEvent(int howMany){
  followCommand(Wire.read()); // read one command byte from the master
}

void setup(){
  Wire.begin(6);
  Wire.onReceive(receiveEvent);
}

void loop(){
}

We have now implemented a slave, but we still need a master. This is easier: the master has no address, so the setup looks like this:

void setup(){
  Wire.begin();
}

To send a command, the following calls can be used:

Wire.beginTransmission(6); // transmit to device #6
Wire.write(LEFT);          // send LEFT; change to whatever you need
Wire.endTransmission();

So the master program would look something like this:

#include <Wire.h>
#define LEFT 0
#define RIGHT 1
#define STOP 2
#define REVERSE 3

void setup(){
  Wire.begin(); // join the bus as master (no address needed)
}

void loop(){
  Wire.beginTransmission(6); // address slave #6
  Wire.write(LEFT);          // replace with whatever command you need
  Wire.endTransmission();
  delay(1000);
}