Regarding µSpeech 4.1.2+

Dear Users of µSpeech library,
With the release of the 4.2.alpha library I have been receiving a lot of mail about the debug µSpeech program not working. I sincerely apologize for this inconvenience, as I have very little time on hand to address the issues associated with 4.1.2+ and 4.2.alpha. I realized that I had made the error of not flagging 4.1.2 as a pre-release; I have fixed this now. I also realized that the documentation I have on YouTube is outdated; I will be addressing this as soon as possible.

If anyone has the time and is willing to look into why debug_uspeech is giving trouble, please feel free to create a pull request on GitHub. Because I am currently going through my school finals, I am unable to devote the time required to address this issue. In the meantime, please revert to µSpeech 4.1.1.

Sincere Apologies,

Arjo Chakravarty


P.S. If you wish to play with 4.2.alpha, there is a copy of the original debug uSpeech here: https://gist.github.com/arjo129/9c73ec533b2b1a6f91b4/

Using skewness to perform syllable recognition – Part I (Theory)

Currently the syllable class contains an accumulator for each letter; if certain letters occur, the result is mapped to a particular syllable. I think this is fairly limited, however, because “fish” and “shift” will be interpreted as the same word. Yet µSpeech should be capable of better: we are able to tell when an individual letter has been said. For instance, the “f” in “fish” should occur mostly at the beginning, whereas the “f” in “shift” should occur at the end.

To solve this problem I have been considering two methods: skewness and a method based on pure calculus. Today I will explore skewness.

Skewness – The theory

Skewness is a measure of how much a distribution leans to one side. Its statistical definition is:

\frac{\mu_3}{\sigma^3}

Where \mu_3 is the third central moment, i.e. E[(x-\mu)^3], and \sigma is the standard deviation.

Now, going back to high school mathematics and expanding the cube:

E[(x-\mu)^3] = E[x^3 - 3x^2\mu + 3x\mu^2 - \mu^3] = E[x^3] - 3\mu E[x^2] + 3\mu^2 E[x] - \mu^3

Given that \mu = E[x]:

E[(x-\mu)^3] = E[x^3] - 3\mu (E[x^2] - \mu^2) - \mu^3

Since \sigma^2 = E[x^2]-\mu^2:

E[(x-\mu)^3] = E[x^3] - 3\mu \sigma^2 - \mu^3

This means the algorithm only needs to compute three quantities: E[x^3], \mu and \sigma^2; the skewness then follows as \frac{E[x^3] - 3\mu\sigma^2 - \mu^3}{\sigma^3}.

Making the algorithm online

Now, given that µSpeech has stringent memory requirements, it seems imperative that we devise the algorithm so that:

  • Data is not kept in an array.
  • Computation is minimal.

Thus, keeping these in mind, it seems necessary to look at ways of computing the three variables online. To start, let's tackle the simplest algorithm, the one for \mu.

Given that \mu = E[x] = \sum x_i p_i = \frac{1}{n}\sum_i x_i, one can write an incremental update as such:

\mu_i = \mu_{i-1} + \frac{x_i - \mu_{i-1}}{i}
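
As a quick sketch (the variable names are mine, not from the library), this update is only a couple of lines of C++:

float mean = 0.0f;    // running estimate of mu
unsigned long n = 0;  // number of samples seen so far

void updateMean(float x) {
  n++;
  mean += (x - mean) / n;  // mu_i = mu_{i-1} + (x_i - mu_{i-1}) / i
}

No array of samples is kept; each sample updates the estimate and is then discarded.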

This can be extended to E[x^3]:

E[x^3]_i = E[x^3]_{i-1} +\frac{x^3_i-E[x^3]_{i-1}}{i}

The third value we need is \sigma^2:

\sigma^2 = E[x^2]-E[x]^2

So we need to find E[x^2]:

E[x^2]_i = E[x^2]_{i-1} +\frac{x^2_i-E[x^2]_{i-1}}{i}
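
Putting the three updates together, a minimal C++ sketch of the whole online skewness computation could look like the following (the names and structure are my own, not part of the µSpeech API):

#include <math.h>

float ex = 0, ex2 = 0, ex3 = 0;  // running E[x], E[x^2], E[x^3]
unsigned long n = 0;

void addSample(float x) {
  n++;
  ex  += (x - ex) / n;
  ex2 += (x * x - ex2) / n;
  ex3 += (x * x * x - ex3) / n;
}

float skewness() {
  float var = ex2 - ex * ex;                      // sigma^2 = E[x^2] - E[x]^2
  float m3  = ex3 - 3 * ex * var - ex * ex * ex;  // E[(x - mu)^3] from above
  return m3 / (var * sqrtf(var));                 // mu_3 / sigma^3
}

Only four variables are stored no matter how many samples are processed, which satisfies both constraints listed above.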

Part 2 coming soon.

µSpeech 4.0 coming soon with codebender.cc support

The µSpeech library is almost ready to be deployed as version 4.0. A number of bugs have been fixed and new features have been added. Among the new features is a way to store and compare words, and it is now significantly easier to calibrate the library. I have also augmented the documentation with a video on calibrating the latest version of the library. If you are interested in trying it beforehand:


git clone -b 4.0-workingBranch https://github.com/arjo129/uSpeech.git

UPDATE: 4.0 is now mainstream; just go to the downloads page to download the latest version.

A video demonstrating how to calibrate the phoneme recognizer is included in the new documentation.

The new API docs will be coming up soon. I am in the process of getting the library updated at codebender.cc, so those of you who use it to program your Arduino can enjoy the benefits of µSpeech.

Complete list of changes:

  • Update codebender support.
  • Fix vowel detection.
  • Package in an Arduino IDE friendly format.
  • Video documentation.
  • Improve ease of use.
  • New API for easy word recognition.

How µSpeech works

It's been over a year since I posted anything. This does not mean that I was not doing anything, but rather that I was working on other things (and also the Chinese government has blocked WordPress, and I only got a VPN recently). One of the things I had been working on was µSpeech, speech recognition software for the Arduino. This had originally seemed crazy, as speech recognition is a very computationally demanding process. Clocked at a couple of megahertz and with only kilobytes of RAM, the Arduino cannot afford to use a standard speech recognition algorithm.

Most speech recognition algorithms involve a process known as the Fast Fourier Transform (FFT). For those who are unaware, the FFT is a process which takes a sound and splits it into its constituent frequencies. The FFT is not something that is particularly easy to do; in fact, contrary to its name, it is an extremely slow process on a chip of this class. The innovation in µSpeech is that it bypasses this process entirely, at a cost: µSpeech is only able to differentiate between fricatives and voiced fricatives. Its ability is therefore limited, but it is good enough to differentiate between commands such as “left”, “right”, “forward” and “backward”.

The thing about fricatives (such as /f/, /s/ and /sh/) is that if you touch your throat while saying them, you realize that the vocal cords play no role in making these sounds. They are made entirely by the mouth and the air coming out of it. The key consequence is that these sounds have an inherent tendency to be noise-like and to contain higher frequencies. If you were to plot the air pressure over time, the sound of /s/ has a very chaotic graph that zigzags a lot, which is not the case with the sound of /a/. Thus I found that the following formula works well:

\large{ c = \frac{\sum |\frac{df(x)}{dx}|}{\sum |f(x)|}}

Letters such as /s/ result in a very high value of c, whereas letters such as /a/ result in a low value. Voiced fricatives such as /v/ result in a value in between.
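
As an illustration of the formula (this is a sketch of the idea, not the library's actual code; the window length and integer sample type are assumptions), the coefficient can be computed over a buffer of microphone samples like so:

#include <stdlib.h>

// c = sum(|df(x)/dx|) / sum(|f(x)|), with the derivative taken as the
// difference between consecutive samples in the window.
float coeff(const int *samples, int len) {
  long dSum = 0, aSum = 0;
  for (int i = 1; i < len; i++) {
    dSum += abs(samples[i] - samples[i - 1]);  // |df(x)/dx|
    aSum += abs(samples[i]);                   // |f(x)|
  }
  return aSum ? (float)dSum / aSum : 0.0f;
}

On an Arduino the samples would typically come from analogRead() on the microphone pin, with the DC offset subtracted first so the signal is centred on zero.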

I found that the value of c falls within a certain range depending on the letter (and the microphone). Thus, when you calibrate µSpeech, you are essentially tweaking these threshold values. It generally takes a full afternoon to get them right!