Robert B on March 7th, 2014

I’ve been interested on and off in the Voynich Manuscript, mostly because it appeals to my appreciation of old occult books. Looking at the book, it’s clear there is a lot of structure in its writing. It’s clearly not, say, solely a work of art such as the Codex Seraphinianus. The history of the Voynich and the cracking attempts against it can be found all over the place. I particularly enjoyed Nick Pelling’s book, Curse of the Voynich. It might be fun, I thought, to take a look at it myself.

The first step was assigning letters to each glyph. There is a standard format called EVA that has been used to transcribe the Voynich, but I found it too cumbersome. Although EVA is expressive enough to be transformed into any other notation, that transformation would rely on an already-completed transcription, and I really wanted to start with a fresh eye.

Warning! Yes, I am aware of other researchers’ theories. The idea is that I’ll use my own theory and see where it takes me. Maybe I’ll make a bad decision. In any case, I reserve the right to modify my theories!

Using the images of the Voynich as my starting point, I looked through the first few pages and settled on a basic alphabet of glyphs, closely modeled on EVA:

Voynich glyphs

Again, this is just my initial take: 25 glyphs. I’m not taking into account other, more rare glyphs at this point. Major differences between my notation and EVA are that my ‘g’ is EVA ‘m’, my ‘v’ is EVA ’n’, my ‘q’ is EVA ‘qo’, and my ‘x’ is EVA ‘ch’. I also specifically include an ‘m’ and an ’n’ symbol for EVA ‘iin’ and ‘in’, respectively.

The one ‘q’ I found with an accent above it, I just decided to render as ‘Q’. Likewise, the ‘x’ with an accent above I render as ‘X’. The tabled gallows characters ‘f’, ‘k’, ‘p’, and ’t’ are represented as capital letters. This doesn’t mean that they are the same character. This is just a mapping of glyphs to ASCII.

Next, I transcribed some pages using this mapping. I decided to use ‘^’ for a character indicating the beginning of a “paragraph”, including the beginning of any obviously disconnected text areas. Similarly, ‘$’ marks the end of a “paragraph”. A period is a space between “words”. Occasionally I had to make a judgement call as to whether there was a space or not. Furthermore, lines always end in either a period or, if the line is the last in a paragraph, an end-paragraph symbol.

Finally, any character that I could not figure out or that doesn’t fit in the mapping is replaced by an asterisk.

Thus, the first paragraph on the first page is transcribed as follows:

F1r

F1r t

Again, there are many caveats: I had to make judgement calls on some glyphs and spacing. I’m also aware that Nick Pelling has a theory that how far the swoop on the v goes is significant, and clearly I’m ignoring that.

In any case, after transcribing a few pages, I became aware of certain patterns, such as -am nearly always appearing at the end of a word, and three-letter words being common. I did a character frequency analysis, and found that ‘o’ was the most common character, with ‘y’, ‘a’, and ‘x’ being about 50% of the frequency of ‘o’. Way at the bottom were ‘F’, ‘f’, and ‘Q’.
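
For anyone who wants to play along, here is roughly how I tally character frequencies in Octave (a sketch only; the file name is a placeholder, and my actual transcription files may differ in detail):

% Count glyph frequencies in a transcription file.
% "f1r.txt" is a hypothetical file holding one page of transcription.
text = fileread("f1r.txt");
text(ismember(text, ".^$*")) = [];   % drop word/paragraph markers and unknowns
text(isspace(text)) = [];            % drop newlines and spaces
[chars, ~, idx] = unique(text);      % the distinct glyph letters
counts = accumarray(idx(:), 1);      % how many times each occurs
[counts, order] = sort(counts, "descend");
for k = 1:numel(chars)
  printf("%s  %d\n", chars(order(k)), counts(k));
end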

Then I did another frequency analysis, this time of trigrams. Most common were ‘xol’, ‘dam’, and ‘xor’. Then I asked, how often do the various trigrams appear at the beginning of a word and at the end of a word? The top five beginning trigrams were ‘xol’, ‘dam’, ‘xor’, ‘Xol’, and ‘xod’, while the top five ending trigrams were ‘xol’, ‘xor’, ‘dam’, ‘Xol’, and ‘ody’.
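
A similar sketch tallies trigrams and how often each one begins or ends a word (same caveats as above; trigrams containing ‘*’ are skipped so they can be used as struct field names):

% Count trigrams, word-initial trigrams, and word-final trigrams.
text  = fileread("f1r.txt");                    % hypothetical file name
words = strsplit(text, {".", "^", "$", "\n"});  % words are delimited by . ^ $
tri = struct();                                 % trigram -> [total, initial, final]
for w = words
  s = w{1};
  for k = 1:length(s) - 2
    t = s(k:k+2);
    if any(!isletter(t)), continue; end         % skip trigrams containing '*'
    if !isfield(tri, t), tri.(t) = [0 0 0]; end
    tri.(t) += [1, k == 1, k == length(s) - 2];
  end
end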

Are these basic lexemes? I began to suspect that lexemes could encode letters or syllables.

I turned to page f70v2, which is the Pisces page. There were 30 labels, each next to a lady. The labels were things like ‘otolal’, ‘otalar’, ‘otalag’, ‘dolarag’… Most of the words looked like they were composed of three short lexemes, mostly two letters each: ‘ot’, ‘ol’, ‘al’, ‘ar’, ‘dol’, ‘ag’, and so on.

So I decided to look for more lexemes by picking the most common trigram, ‘xol’, and looking for words containing it. These were: ‘xol’, ‘otxol’, ‘o*xol’, ‘ypxol’, ‘dxol’, ‘xolam’, ‘xolo’, ‘xololy’, ‘xolTog’, ‘opxol’, ‘btxol’, ‘xoldy’, ‘xolols’. That would give us the lexemes ‘ot-’, ‘yp-‘, ‘d-‘, ‘-am’, ‘-o’, ‘-oly’, ‘-Tog’, ‘op-‘, ‘bt-‘, ‘-dy’, and ‘-ols’.

Similarly, for ‘xor’, we get lexemes ‘d-‘, ‘xeop-‘, ‘k-‘, ‘ot-‘, ‘bp-‘, ‘ok-‘, ’t-‘, ‘-am’, and ‘Xk-‘. At least in the first few pages.

So certain lexemes seemed to come up often, namely ‘ot-‘, ‘ok-‘, ‘op-‘, ‘d-‘, ‘-am’, ‘-dy’.

One possibility that no doubt has already been discounted decades ago is that each lexeme corresponds to a letter. So something common like ‘xol’ could be ‘e’. Because there are so many lexemes, clearly a letter could be encoded by more than one lexeme. Or maybe each lexeme encodes a syllable, so ‘xol’ could be ‘us’. I tend to doubt the polyalphabetic hypothesis, since in that case I would expect to see more uniform statistics.

Maybe the labels in Pisces encode numbers. If that’s the case, then by Benford’s Law, ‘ot-‘ would probably encode the numeral 1, since that appears in the first position 16 out of 30 times, and ‘ok-‘ could be 2, appearing 8 times.

Anyway, that’s as far as I’ve gotten. 


Robert B on January 12th, 2013

In the previous article on Restricted Boltzmann Machines, I did a variety of experiments on a simple data set. The results for a single layer were not very meaningful, and a second layer did not seem to add anything interesting.

In this article, I’ll work with adding sparsity to the RBM algorithm. The idea is that without somehow restricting the number of output neurons that fire, any random representation will work to recover the inputs, even if that representation has no organizational power. That is, the representation learned will likely not be conducive to learning higher-level representations. Sparsity adds the constraint that we want only a fraction of the output neurons to fire.

The way to do this is by driving the bias of an output neuron more negative if it fires too often over the training set. Or, if it doesn’t fire enough, increase the bias. Octave code here. The specific function I added is lateral_inhibition.
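
The actual implementation is the lateral_inhibition function in the linked code; as a rough sketch of the idea (not the exact code), the bias update amounts to something like this:

% Sketch of the sparsity-driven bias update.  q is the measured mean
% activation of each output neuron over the training set, p is the target
% fraction (e.g. 0.05), and lambda is the sparsity parameter discussed below.
function b = adjust_biases(b, q, p, lambda)
  % A neuron that fires more often than the target (q > p) has its bias
  % driven more negative; one that fires too rarely has its bias increased.
  b = b - lambda * (q - p);
end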

I used the same data set based on horizontal and vertical lines, 5000 patterns. I settled on 67 output neurons, 500 epochs, with momentum, changing halfway through. I decided that a fraction of 0.05 would be interesting, meaning that on average I would want 67 x 0.05 = 3.35 output neurons activated over all 5000 patterns. In order to compensate for the tendency for biases to be very negative due to the sparsity constraint, I set the penalty for weight magnitudes to zero so that weights can become stronger to overcome the effect of the bias.

The change in bias is controlled by a sparsity parameter. I ran the experiment with various sparsity parameters from 0 (no sparsity) to 100, and here are the costs and activations:

LibreOfficeScreenSnapz003

LibreOfficeScreenSnapz004

The magnitude of the sparsity parameter doesn’t seem to have much effect. Although you can’t tell from the graph, there is in fact a small downward trend in the average active outputs. Right around 10, the average active outputs reaches the desired 3.35, where it stays up to about 80, and then it starts dropping again. So 10 seems like a good setting for this parameter.

Here are the patterns that each neuron responds to, with differing sparsity parameters:

 

Sparsity 0:

OctaveScreenSnapz008

Sparsity 1:

OctaveScreenSnapz009

Sparsity 5:

OctaveScreenSnapz010

Sparsity 10:

OctaveScreenSnapz011

Sparsity 50:

OctaveScreenSnapz012

Sparsity 100:

OctaveScreenSnapz013

Sparsity 200:

OctaveScreenSnapz014

With no sparsity, we get the expected near-random plaid patterns, and nearly all neurons have something to say about any given pattern. With even a little sparsity, however, the patterns do clean themselves up, although not by much, and by sparsity 200, the network learns nothing at all.

One possible reason the patterns really don’t look that sparse is that we asked for the average activation over the entire data set to be 5%. But how much of the data set actually contains a non-empty image? In fact, about 49% of the data set is empty (each sample has no horizontal lines with probability 0.7 and no vertical lines with probability 0.7, and 0.7 × 0.7 = 0.49).

What if we require that no data instance be empty? This time a sparsity target of 0.05 ends up with a relatively terrible log J of -1.8, compared to the previous result of about -2.4. However, increasing the target to 0.07 gives us a log J of -2.6, which is better than before. This is to be expected, since more active neurons can represent each pattern more closely. And yet, the representations we learn still look better:

Sparsity 10:

OctaveScreenSnapz015

Sparsity 20:

OctaveScreenSnapz016

Sparsity 50:

OctaveScreenSnapz017

Sparsity 100 has very poor results.

The visualization is a bit misleading: just because a pixel is something other than full white doesn’t mean the neuron will be activated with high probability when that pixel is on. The maximum weight turns out to be 11.3, and the minimum -4.7. The visualization routine clips the weight values to [-1, +1], meaning that anything at -1 or below is black, while anything at +1 or above is white. However, by using visualize(max(W+c’, -0.5)), we can take into account some of the threshold represented by the (reverse) bias from output to input. We clip at -0.5 so that we can at least see the outlines of each neuron.
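
For reference, here is roughly what that visualization amounts to (a sketch only; it assumes W has one row per output neuron and that c is the column vector of input biases, so W + c’ broadcasts the reverse bias across each row):

% Show each output neuron's weights plus reverse bias as a 9x9 tile,
% clipped from below at -0.5 so the outlines stay visible.
function show_features(W, c)
  V = max(W + c', -0.5);
  n = rows(W);
  side = ceil(sqrt(n));
  for j = 1:n
    subplot(side, side, j);
    imagesc(reshape(V(j, :), 9, 9), [-1 1]);   % display range clipped to [-1, +1]
    axis off;
  end
  colormap(gray);
end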

So here is another run with sparsity 50:

OctaveScreenSnapz018

We can see that, in fact, each neuron does respond to a different line, and that just about 18 lines are represented, as expected.


Robert B on December 29th, 2012

After fiddling around with the part from the previous article, I think I might have a reverse engineered technical diagram. I still don’t know enough about early 20th century mechanical design techniques to know if this is what they would have done, but it should be enough to at least remanufacture this part.

I also realized that I haven’t actually described the part! There are two registers on the typical Monroe calculator, an upper register which indicates operation count (useful for multiplication and division) and a lower register which indicates total. There’s a crank which, when turned one way, zeroes out the upper register, and when turned the other way, zeroes out the lower register. The part that I reverse engineered is shown in the original 1920 US patent 1,396,612 by Nelson White, “Zero setting mechanism” in Figure 5. In the patent, the part, 32, is described as follows:

 

The shaft 60 is normally locked or held against rotation by a rigid arm 32, pivoted upon the shaft 84, and at its free end engaging a peripheral notch 33, of a plate or disk 34, secured to the gear 12…

 

So the next step might be to make an OpenSCAD file for the part, and put it on Thingiverse so that anyone can recreate the part. It probably can’t be 3D-printed at this point, since it really needs to be a metal part. Even Shapeways, which can 3D print metal parts from stainless steel combined with bronze, can only achieve a 1mm detail, and this thing is much more detailed than that.

Full-sized files in various formats: AI | PDF | SVG | PNG

UPDATE: See the thing on Thingiverse.

Carriage pawl reveng


Robert B on December 16th, 2012

Or,

Numerology that sorta kinda works!

One of my half-baked projects is to take apart an early 20th century Monroe mechanical calculator and reverse engineer it so that I have a full set of engineering diagrams of every part. This would enable anyone to recreate broken parts and fix their calculator.

Reverse engineering the design of an early 20th century mechanical part has a lot in common with numerology. If the numbers coincidentally fit, then they’re probably right. If they almost, but not quite fit, then they’re probably right anyway.

Here’s a part that I scanned on an Epson Perfection V700 scanner. This scanner is based on a CCD rather than a contact image sensor (the technology in Canon’s LiDE line), which means that it has a non-zero depth of field. That means that you can scan a part that has height and it won’t end up too blurry. I scanned the part at 1400 dpi so that I could optically measure it. The thing sticking out at the top is just a screwdriver that I used to hold the part horizontal.

Lower register pawl

I could pop this into Illustrator and use the pen tool to trace around the part, but all this would get me is an outline of this particular part with no insight into why it had that particular outline. This part was designed, not evolved. It was designed to work with other parts. So clearly its measurements and the relationships between one bit of the part and another are not arbitrary.

For example, take the hole at the top. It fits over a shaft. Now, “the ancients” probably didn’t use shafts of arbitrary diameter. They were standard, and since this was an American design, that meant fractions of an inch, specifically inverse powers of two: 1/2, 1/4, 1/8, and so on. The hole at the top measures between 0.187″ and 0.188″ on my calipers. But 3/16″ is 0.1875″, so it makes sense that the engineers designed this hole to be exactly 3/16″. This fits around the shaft that is 0.001″ under 3/16″, which I suppose is a standard undersized shaft.

The 1910 Cyclopedia of Mechanical Engineering, edited by Howard Raymond, has this to say on page 129 in the section on mechanical drawing: “Keep dimensions in even figures, if possible. This means that small fractions should be avoided… Even figures constitute one of the trade-marks of an expert draftsman. Of course a few small fractions, and sometimes decimals, will be necessary. Remember, however, that fractions must in every case be according to the common scale; that is, in sixteenths, thirty-seconds, sixty-fourths, etc.; never in thirds, fifths, sevenths, or such as do not occur on the common machinist’s scale.”

In Illustrator, I pulled up the image and drew a circle of diameter 3/16″, placing it so that it fit exactly into the hole in the image. Now I had the center of that hole, and I could draw more concentric circles. Because I could measure these diameters directly on the part, I used those diameters: 3/8″ and 7/16″.

Adobe Illustrator CS6ScreenSnapz002

The measurements in the image were done using VectorScribe.

Note that while the inner circle and middle circle (diameter 3/8″) fit exactly, the outer circle (7/16″) does not. The outer circle does not seem to be quite concentric, but numerology: if it’s nearly right, it probably is. By moving the outer circle a few thous, I was able to get a good registration. Under high magnification, I was able to tell that the inner subpart was welded onto the sheet metal subpart, so all this indicates that there were several steps involved in manufacturing this part: first, turn the small subpart on a lathe. Then create the larger subpart from sheet metal. Then weld the two together. Welding the two together was apparently not an extremely exact procedure.

After moving the large circle to its new center, the centers no longer coincide.

Adobe Illustrator CS6ScreenSnapz003

Now for the rest of the part. Using SubScribe, I drew a circular arc on a circular-looking feature. Then I measured its radius.

Adobe Illustrator CS6ScreenSnapz004

0.283″ x 2 = 0.566″ is close enough to a diameter of 9/16″ to say that this was the intent of the original engineer. I drew the circle, and then measured the distance between the centers.

Adobe Illustrator CS6ScreenSnapz005

0.568″ is again close enough to 9/16″. And not coincidentally, this second center coincides precisely with the location of another shaft on the machine. That certainly nails down the intent of the engineer.

I can now draw the inner tangent line between the two circles (done again using SubScribe):

Adobe Illustrator CS6ScreenSnapz006

The length and angle of this line in fact do not matter, since there is one and only one inner tangent line connecting these two circles in the right direction. Certainly 0.269 is close to 17/64, but that was not the design constraint. The line had to be tangent to the two circles, and drawing inner and outer tangent lines was a geometric construction familiar to the ancients.

We can now draw another concentric circle corresponding to the outer outline of the part, and draw an outside tangent line. Again, knowledge of design intent lets us set the outer circle’s diameter at 7/8″, which seems to fit precisely onto the part.

Adobe Illustrator CS6ScreenSnapz007

The more excitable among you may have noticed by now that the inner surface of the inner tab appears to be a circular arc, and you would be right. Drawing the circle freehand gives us a diameter of 0.439″, which is close enough to 7/16″ as to fix that measurement. But right now I won’t analyze the tab, since I want to get the larger part done.

Near the bottom of the part, we can draw some tangent lines.

Adobe Illustrator CS6ScreenSnapz008

I did this in SubScribe by picking a point on the straight section of the part, then drawing a line tangent to the circle. Then I extended the line outwards. Now those lines could have started anywhere on the circle. Why these particular points? Let’s draw some lines intersecting the centers. I’ll also rotate the diagram so that the inter-center segment is horizontal.

Adobe Illustrator CS6ScreenSnapz009

The angle that the rightmost tangent line forms with the horizontal, 67.35 degrees, is irrelevant, since the constraint for that intersection point was based on an outer tangent line. But consider the angle formed between that angle and the next intersection: 125.71 – 67.35 = 58.36 degrees. This is close to 60 degrees, a nice round angle. For the intersection in the inner circle, the angle is 171.76 – 67.35 = 104.41, which is very close to 105 degrees, which is 60 + 45, more round angles. So the design intent seems clear: the outer intersection is 60 degrees from the outer tangent line, while the inner intersection is 45 degrees away from that. Let’s move the intersections and construct the tangent lines so these relations become exact.

Adobe Illustrator CS6ScreenSnapz010

As mentioned above, this pawl fits into a hole on a gear located on a shaft. There are three shafts so far; let’s call them A, B, and C. The pawl fits on shaft A, goes around shaft B, and the gear it locks is on shaft C. We know that the distance between shaft A and shaft B is 9/16″. I also know from direct measurement that the distance between shaft C and shaft B is also 9/16″. However, the distance between shaft A and shaft C is irregular: 0.977″, not close to any fraction at all. This may be due to some constraint that we do not yet know about.

However, let’s pretend that 0.977″ is eventually determined through some constraint, and place the location of shaft C on the diagram.

(Update, 17 Dec 2012: It turns out that a line drawn from B perpendicular to A-C has a length of very close to 9/32″, which makes A-C tangent to the 9/16″ diameter circle around B. Maybe that’s why the shafts are where they are: A-B is 9/16″, B-C is 9/16″, and A-C is tangent to the 9/16″ diameter circle around B.)
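
A quick check on that update: treating A-B-C as an isosceles triangle with AB = BC = 9/16″ and AC = 0.977″, the foot of the perpendicular from B lands at the midpoint of A-C, so its length is

h = \sqrt{(9/16)^2 - (0.977/2)^2} = \sqrt{0.3164 - 0.2386} \approx 0.279''

which is within a couple of thousandths of 9/32″ = 0.28125″, the radius of a 9/16″ diameter circle around B.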

Adobe Illustrator CS6ScreenSnapz012

I also put a circle of diameter 3/16″ (shaft C’s diameter), and another of diameter 3/8″ around shaft C’s center, which corresponds to the size of shaft C’s bushing where the hole is. You can imagine the pawl fitting into a hole on the bushing by looking at the diagram.

It seems fairly clear that the pawl’s end is designed to fit into the hole. The end also isn’t square; it is tapered. Remember that inner tab? There is a cam which that inner tab rides on. The large diameter for the cam measures 17/32″, and the small diameter measures 29/64″. When the cam is rotated so that the large diameter pushes the inner tab, the pawl lifts out of the hole. When the cam small diameter is against the tab, the pawl is inside the hole. I can add the two cam diameters and then rotate the image of the pawl to simulate the two states.

In the hole (locked state):

Adobe Illustrator CS6ScreenSnapz016

Out of the hole (unlocked state)

Adobe Illustrator CS6ScreenSnapz017

Clearly one design criterion we can deduce is that the width of the pawl’s end at the outside of the hole when the pawl is in the locked state must be equal to the width of the hole, and the pawl must thereafter taper. Measuring the width of the pawl in the locked state at the hole gives 0.081″. Perhaps not surprisingly, this is the diameter of the hole as measured with calipers. In fractions, this is near enough to 5/64″, a drill size that any mechanical engineer would have had.

Adobe Illustrator CS6ScreenSnapz018

Here I’ve drawn the outline of the hole along with its centerline. I’ve made the depth just deep enough for the pawl’s end. We can see that the tapered pawl end does indeed fit in the hole.

Adobe Illustrator CS6ScreenSnapz019

Resetting the part to its design position, I found that I could draw a line along the outer outline of the pawl tangent to shaft C’s outline:

Adobe Illustrator CS6ScreenSnapz020

The angle formed between the C-B line and the beginning of the construction line is 129 degrees. It is not a very round angle, and if the angle were not too important, it would make more sense to have it be round, perhaps divisible by 5.

Another possibility is to look at the angle formed by a radial line with the intersection:

Adobe Illustrator CS6ScreenSnapz021

Relative to our reference angle at the right, this is 91.57 degrees. Too far away from 90 degrees; a line placed at 90 degrees intersects the upper outline nowhere near the right place. The radial line is also 31.57 degrees from the 60-degree line. This could be significant, since 31.5 is exactly 7/10 of 45 degrees, and placing a radial line at 31.5 degrees produces an intersection very nearly at the drawn intersection.

I don’t know enough about early 20th century mechanical design techniques to know if this would be reasonable: angles measured in tenths of 45 degrees.

If you’d like to have a try at figuring this out, here’s the Illustrator file.


Robert B on August 16th, 2012

The Restricted Boltzmann Machine is an autoencoder which uses a biologically plausible algorithm. It uses a kind of Hebbian learning, which is the biologically plausible idea that “neurons that fire together, wire together”.

Suppose we have an input layer of dimension N_in and an output layer of dimension N_out, with no hidden layer in between. All input units are connected to all output units, but input units are not connected to each other, and output units are not connected to each other, which is where the “Restricted” in the name comes from.

Let the input units be binary, so either 0 or 1. Further, each output unit uses the logistic function, so that the output of unit j is:

b_j is the bias term for output unit j, and w_ij is the weight from input unit i to output unit j. Note that we’re treating the logistic function’s value as a probability: rather than emitting that probability directly, the output is binary, and output unit j turns on with probability p_j. That is, the output unit is stochastic.

Now, to make this an autoencoder, we want to feed the outputs backwards to the inputs, which works as follows:

We’re just doing the same thing in reverse, except that there is a bias ci now associated with each input unit. There is some evidence that this kind of feedback happens biologically.

After this downward pass, we perform one more upward pass, but this time using the probability input rather than the reconstructed input:

We do this for every input sample, saving all the values for each sample. When all the m samples have been presented (or, in batch learning, when a certain proportion of the m samples have been presented), we update the biases and weights as follows:

And that’s the Restricted Boltzmann Machine. There are other refinements such as momentum and regularization which I won’t cover, but which are implemented in the example Octave file. There are also many extremely helpful hints as to parameter settings in Hinton’s paper A practical guide to training restricted Boltzmann machines.
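
The example Octave file has the full implementation; stripped of momentum and regularization, a single update looks roughly like this (a sketch, with my own choice of matrix orientations spelled out in the comments):

% One contrastive-divergence (CD-1) update, minus momentum and regularization.
% X is an m x N_in matrix of binary samples, W is N_in x N_out,
% b is 1 x N_out (output biases), c is 1 x N_in (input biases).
function [W, b, c] = rbm_update(W, b, c, X, alpha)
  m = rows(X);
  sigmoid = @(z) 1 ./ (1 + exp(-z));

  % Upward pass: probabilities, then stochastic binary output states.
  ph = sigmoid(X * W + b);
  h  = ph > rand(size(ph));

  % Downward pass: reconstruct the inputs from the binary outputs.
  pv = sigmoid(h * W' + c);

  % Second upward pass, using the reconstruction probabilities.
  ph2 = sigmoid(pv * W + b);

  % Update from the difference between data and reconstruction statistics.
  W += alpha * (X' * ph - pv' * ph2) / m;
  b += alpha * (sum(ph) - sum(ph2)) / m;
  c += alpha * (sum(X) - sum(pv)) / m;
end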

As an example (Octave code here), I set up a 9×9 array of inputs which correspond to pixels (0=off, 1=on), so my input dimension is 81. I generate a bunch of samples by placing some random vertical and horizontal lines in, but with more lines being less likely. According to the paper, a good initial guess as to the number of output units is based on the number of bits a “good” model of the input data would need, multiplied by 10% of the number of training cases, divided by the number of weights per output unit.

Each sample has zero horizontal lines with probability 0.7, one horizontal line with probability 0.21, and two horizontal lines with probability 0.09, with a vanishingly small probability of three lines, and likewise with vertical lines, and each line can be at any of nine positions, which means that a “good enough” model of the input would need something like 9 bits for the horizontal lines and 9 bits for the vertical lines, or 18 bits. With 3000 or so samples, 300 x 18 / 81 = 67 output units.

I also used 2000 epochs, averaging weights through time, a momentum term which starts at 0.5, then switches to 0.9 halfway through, a learning rate which starts at 1, then switches to 3 halfway through, and a regularization parameter of 0.001.

I set aside 10% of the samples as cross-validation, and measure the training and cross-validation costs. The cost (per sample per pixel) is defined as:

In other words, this is the usual logistic cost function, except that instead of the output, we’re using the probability during the downward pass that an input pixel is 1.
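
Spelled out, that cost comes to the following (with N = 81 pixels, m samples, x the input pixels, and p̂ the downward-pass probabilities):

J = -\frac{1}{mN} \sum_{i=1}^{m} \sum_{k=1}^{N} \left[ x_k^{(i)} \ln \hat{p}_k^{(i)} + \left(1 - x_k^{(i)}\right) \ln\left(1 - \hat{p}_k^{(i)}\right) \right]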

Here are 16 example input samples:

Samples

And here are the trajectories of the training and cross-validation costs over the 2000 epochs. Strictly speaking, the cost is a better measure than error in the reconstructed input since it is not as affected by chance pixel flips during reconstruction (output to input) as a direct pixel-to-pixel comparison. For example, if an input pixel should be on, and it is only turned on 90% of the time during reconstruction, then the cost for that pixel is -ln 0.9 =  0.105. If we were to actually reconstruct that pixel, then 10% of the time the error would be 1, and 90% of the time the error would be 0; but we only do a single evaluation. So the cost gives us a better idea of how likely a pixel is to be correct.

Logj layer1

Blue is the training cost, while red is the cross-validation cost. We can see that the cross-validation cost is a little higher than the training cost, indicating that there is likely no overfitting or underfitting going on. The sudden acceleration at epoch 1000 is due to changing the momentum at that point. The end cost is about 0.004 while the cross-validation cost is about 0.005.

Here is a view of what the weights for each of the 67 output units encode:

Weights1

Interestingly, only one output unit seems to encode a single line. The others all seem to encode linear combinations of lines, some much more strongly than others. The data shows that on average, 38 of the 67 output units are active (although not all the same ones), while at most 51 are active (again, not all the same ones).

Varying the number of output units affects the final cost, apparently with order less than log N. The end cost for 200 units is about 0.002, as is the cross validation cost. The average number of activated outputs appears to be a little over half.

Logjn layer1

Act layer1

We can learn a second layer of output units by running all the input samples through to the first output layer, and then using their binary outputs as inputs to another RBM layer. We would be using probabilistic binary outputs, so it is important to have enough samples that the next layer gets a good idea of what the input distribution is. We can use the probability outputs directly, but I’ve found, at least with this toy problem, that this doesn’t seem to lead to significantly better results.

To try this, I’ll use 100 units in the first layer, which could be overkill, and a variable number of units in the second layer, from 10 to 200. To get the cost, I can run the output all the way back to the input layer. Here’s the result in log cost per pixel:

J layer2

So this isn’t so good: the error rate for the second layer is much higher than that for the first layer. One possibility is that the first layer is so good that the second layer is not necessary at all. But then I would have thought we would get at least the same error rate.

That’s where I’ll leave this article right now. Possible future investigations would be more complex inputs, and why layer 2 refuses to be as good as layer 1.


Robert B on July 22nd, 2012

Over the years, I’ve bagged some technological items from before I was born, and some nostalgic items from the 80s.

 

DSC00114

A Radio Shack 40-155 “Personal Stereo Speaker System”,
a 1985 Sony WM-F12 Walkman with headphones,
and a 1982 Radio Shack PC-2 Pocket Computer in its case.

 

DSC00123

1980 Sound Gizmo and 1980s Merlin


DSC00118

Arithma Addiator (with its case and stylus), a film sprocket thing,
a Burroughs punched card printing plate for CanTabCo, and small wooden slide rule


DSC00119

Modern fakes, but in the corner is a roll of J. L. Hammett aluminum foil
for a mimeograph 


DSC00120

Vacuum tubes, CRTs, voltmeter


DSC00122

Dymo label machine, photomultiplier tube,
some individually wrapped screws for the Air Force 

Robert B on July 16th, 2012

An autoencoding algorithm is an unsupervised learning algorithm which seeks to recreate its input at its output. The layer or layers between input and output then become a representation of the input, which may have fewer or more dimensions than the input.

If the internal layer is further restricted so that only a very few of its components are active for any given input, then it is a sparse autoencoder. Generally, if the dimension of the internal layer is less than that of the input layer, then the autoencoder is performing, appropriately enough, dimension reduction. If, however, the number of dimensions is greater, then we enter the realm of feature detection, which, to me anyway, is a much more interesting application of autoencoding. In addition, feature detection appears to be how the brain handles input.

One of the challenges of feature detection is to ensure the internal layers don’t degenerate into a trivial representation of the input, that is, merely repeating the input so that each learned feature is just one of the input features.

I’ll start by talking about autoencoding via backpropagation. Before we tackle this, I’d like to rehash the mathematics of backpropagation, but this time in matrix form, which will be much easier to handle. So feel free to skip if you’re not really interested.

 

Backpropagation, a more thorough derivation

We start with the same diagram as before:

Neural network

This time, however, we’ll use matrix notation. The equation for the vector of activations for layer l is as follows:

where:

  • a^(l) is a column vector of s_l elements (i.e. an s_l x 1 matrix), the activations of the neurons in layer l,
  • b^(l) is a column vector of s_l elements (i.e. an s_l x 1 matrix), the biases for layer l, equivalent to a fixed input 1 multiplied by a bias weight, separated out so we don’t have to deal with a separate and somewhat confusing input augmentation step,
  • W^(l-1) is an s_l x s_(l-1) matrix for the weights between layer l-1 and layer l, and
  • g is a squashing function, which we can take to be the logistic function (for range 0 to 1) or the tanh function (for range -1 to 1). Or really any differentiable function.

A quick sanity check for z = Wa + b: W is s_l x s_(l-1), a is s_(l-1) x 1, so multiplying W by a cancels out the middle dimension, yielding s_l x 1, which is consistent with the definitions for z and b.
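
Putting the sanity check together with the definitions above, the forward pass for layer l is:

a^{(l)} = g\left( W^{(l-1)} a^{(l-1)} + b^{(l)} \right)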

Now, the cost function for a single data point (x^(i), y^(i)) is as follows:

|| a – y || is simply the Euclidean distance between a and y, otherwise known as the L2 norm. Note also that it is a scalar, and not a vector or matrix.

The cost over all data points, and adding a regularization term, is:

That last term simply means to take every weight between every neuron and every other neuron in every layer, square it, and add. We don’t take any of the bias terms into the regularization term, as usual.

Now, first, we want to determine how gradient descent moves W^(L-1) and b^(L):

This just says that we move W downhill in “J-space” with respect to W, and the same with b. Note that since W^(L-1) is an s_L x s_(L-1) matrix, then so too must the derivative of J with respect to W^(L-1) be. And now let’s compute those derivatives. First, the derivative with respect to the weights in the last layer:

Note that we just called the derivative of g with respect to its argument, g’. For the logistic and tanh functions, these are nice, compact derivatives:

Since the argument of g (being z^(L,i)) is an s_L x 1 matrix, so too is its derivative. a^(L-1,i) is an s_(L-1) x 1 matrix, its transpose is a 1 x s_(L-1) matrix, and thus g' times that transpose is an s_L x s_(L-1) matrix, which is consistent with what we wanted the size of the derivative of J with respect to W^(L-1) to be.

And now with respect to the bias on the last layer:

Let us define:

Note that this is an s_L x 1 matrix. It is the contribution to the weight or bias gradient due to an “error” in output. We can now define our derivatives more compactly:

Now, what about the derivatives with respect to the previous layer weights and bias? The key insight in backpropagation is that we can generalize these derivatives as follows. For l from L to 2 (we start from L because these are recursive equations) we have:

A rigorous mathematical treatment for this is so completely outside the scope of this article as to be invisible :) But the general argument is that delta represents the contribution of a layer to the gradient based on the error between desired output and generated output. For the final layer, this is straightforward, and we can directly calculate it. However, for an internal layer, it is as if the errors from the next layer have propagated backwards through the weights, and so we can calculate, from output to input, the contributions of each layer.

 

Backpropagation, the algorithm

First, zero out an accumulator for each layer. The accumulators have the same dimensions as the weight and bias matrices. So for l from 2 to L:

Second, compute all the forward activations a for a single data point. So, for l from 2 to L, we have:

Compute the delta terms for l from L to 2, and add to the accumulators:

Next, after doing the above two steps for each data point, we compute the gradients for l from 2 to L:

Finally, we use these gradients to go downhill, for l from 2 to L:

That is one round of updates. We start from zeroing out the accumulators to do the next iteration, and continue until it doesn’t look like the cost is getting any lower.

Instead of the above, we could provide a function which, given W and b, computes the cost and the derivatives. Then we give that function to a library which does minimization. Sometimes minimization libraries do a better job at minimizing than manually doing gradient descent, and some of the libraries don’t need a learning parameter (alpha).
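
For concreteness, here is the loop above as a rough Octave sketch (illustrative only; the cell-array layout of W and b is my own choice):

% One gradient-descent iteration of backpropagation.
% W{l} is the s_(l+1) x s_l weight matrix from layer l to layer l+1,
% b{l} is the s_l x 1 bias vector for layer l (l >= 2; b{1} is unused),
% X is d x m with one column per data point, Y holds the desired outputs,
% alpha is the learning rate, lambda the regularization parameter.
function [W, b] = backprop_step(W, b, X, Y, alpha, lambda)
  g      = @(z) 1 ./ (1 + exp(-z));   % logistic squashing function
  gprime = @(a) a .* (1 - a);         % its derivative, in terms of g(z)
  L = numel(b);                       % number of layers
  m = columns(X);

  % Zero out the accumulators.
  for l = 1:L-1
    dW{l} = zeros(size(W{l}));
    db{l} = zeros(size(b{l+1}));
  end

  for i = 1:m
    % Forward pass.
    a{1} = X(:, i);
    for l = 2:L
      a{l} = g(W{l-1} * a{l-1} + b{l});
    end

    % Delta terms, from the output layer back to layer 2.
    delta{L} = (a{L} - Y(:, i)) .* gprime(a{L});
    for l = L-1:-1:2
      delta{l} = (W{l}' * delta{l+1}) .* gprime(a{l});
    end

    % Accumulate the gradients.
    for l = 1:L-1
      dW{l} += delta{l+1} * a{l}';
      db{l} += delta{l+1};
    end
  end

  % Gradient-descent step (biases are not regularized).
  for l = 1:L-1
    W{l}   -= alpha * (dW{l} / m + lambda * W{l});
    b{l+1} -= alpha * (db{l} / m);
  end
end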

 

Adding a sparseness criterion

The whole reason for going through the derivation and not going straight to the algorithm was so that we could add a sparseness measure in the cost function, and see how that affects the algorithm.

First, if we have d dimensions in the input, then an autoencoder will be a d:s_2:d network, where s_2 is the number of neurons in the single internal layer.

We will first determine the average activation of layer 2 over all data points:

Note that this is an s_2 x 1 matrix. To be sparse, we want the values of each element to be very low. If we’re using a logistic function, this means near to zero. If we’re using the tanh function, near to -1, but we will rescale the average activation to lie between 0 and 1 by adding 1 and dividing by 2.

Let us denote our target sparsity for each element as ρ, so that we want our measured sparsity to be close to that. Clearly we don’t want ρ=0, because that would give us a trivial solution: zero weights everywhere.

For a sparsity cost, we will use the following measure, known as the Kullback-Leibler divergence:

Note that the sum applies element-by-element to the measured sparsity vector, and so the cost is a scalar. This cost is zero when each measured sparsity element is equal to the desired sparsity, and rises otherwise.
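
Written out, this is the standard sparse-autoencoder form of the penalty, summed over the s_2 elements of the measured sparsity vector:

J_{\text{sparse}} = \sum_{j=1}^{s_2} \left[ \rho \ln \frac{\rho}{\hat{\rho}_j} + (1 - \rho) \ln \frac{1 - \rho}{1 - \hat{\rho}_j} \right]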

We add this cost to our main cost function as follows:

where β is just a parameter whereby we can tune the importance of the sparsity cost.

Without going through the derivation, we use the following altered delta for layer 2 during backpropagation:

That scalar term added due to sparsity is computed only after all data points have been fed forwards through the network, because that is the only way to determine the average activation of layer 2. It is independent, therefore, of i. So the modification to backpropagation would require this:

  1. Going through the data set, compute all the way up to layer 2, and accumulate the sum of the activations for each neuron in layer 2.
  2. Divide the sums by m, the number of points in the data set.
  3. Perform one iteration of backpropagation.
  4. Go back to step 1.

 

Why I don’t like this

It is said that backpropagation is not biologically plausible, that is, it cannot be the algorithm used by the brain. There are several reasons for this, chief among which is that errors do not propagate backwards in the brain.

A sparse backpropagating autoencoder is doubly implausible, because not only does it rely on backpropagation, but it also requires that we wait until all data points are presented before determining the average activation. It would be much nicer if we had something more biologically plausible, if only because I have the suspicion that any algorithm that is not biologically plausible cannot lead to human-level intelligence.

So in the next article, I’ll talk about a biologically plausible algorithm called the Restricted Boltzmann Machine.


Robert B on July 8th, 2012

K-means clustering is the first unsupervised learning algorithm in this series. Unsupervised means that the answer is not available to the learning algorithm beforehand, just the cost of a potential solution. To me, unsupervised learning algorithms are more exciting than supervised learning algorithms because they seem to transcend human intelligence in a way. An unsupervised learning algorithm will seek out patterns in data without any (or with few) hints. This seems especially important when, as the human, we don’t know what the hints could possibly be.

The Google “Visual Cortex” project shows how powerful unsupervised learning algorithms can be: from millions of unlabeled images, the algorithm found generalized categories such as human faces and cat faces. It is easy to see that if the same thing could be done with an audio stream or a text stream, the streams could be combined at a high enough level for association to produce sounds and text for images, images for text and sounds, and at a high enough level, reasoning.

The K-means clustering algorithm treats data as if it were in clusters centered around some number of points k, one cluster per point. Conceptually, the algorithm picks k centroid points, assigns each data point to the cluster with the nearest centroid, moves each centroid to the center of its cluster, and repeats. The result is a set of centroids which minimizes the distances between each point and its associated cluster’s centroid.

The cost function is:

where C_i is cluster i, and μ_i is the centroid for cluster i.
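
In symbols, this is the usual within-cluster sum of squares:

J = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2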

There are a few methods for picking the initial centroids. One method, the Forgy method, involves picking k random points from the data set to be the initial centroids. Another method, the Random Partition method, assigns each data point to a random cluster, then produces the initial centroids for each cluster. Regardless of the initial method, the algorithm proceeds by repeating the following two steps:

First, produce the clusters by assigning each data point to one cluster. This means comparing the distance of a point to each centroid, and assigning the point to the cluster whose centroid yields the lowest distance.

Second, calculate the centroid of each resulting cluster.

Repeat these steps until the total cost does not change.
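
My actual implementation is in Java, but here is the same algorithm as a compact Octave sketch (Forgy initialization, and a fixed number of iterations instead of a convergence test):

% K-means with Forgy initialization.  X is an m x d matrix of data points,
% k the number of clusters, iters the number of assign/update rounds.
function [mu, assign] = kmeans_sketch(X, k, iters)
  m  = rows(X);
  mu = X(randperm(m, k), :);            % Forgy: k random data points
  for it = 1:iters
    % Assignment step: each point goes to its nearest centroid.
    d2 = zeros(m, k);
    for j = 1:k
      d2(:, j) = sum((X - mu(j, :)) .^ 2, 2);
    end
    [~, assign] = min(d2, [], 2);
    % Update step: move each centroid to the mean of its cluster.
    for j = 1:k
      members = (assign == j);
      if any(members)
        mu(j, :) = mean(X(members, :), 1);
      end
    end
  end
end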

 

A Concrete Example

I implemented the above algorithm in Java and ran it on the usual concrete strength data set. As usual, I set aside 20% of the data set as a cross-validation. But a problem quickly became apparent: how many clusters should I use? Clearly the more clusters, the less the overall cost would be simply because there would be more centroids.

One solution is to try different numbers of centroids and ask if there is an obvious point where there is not a lot of improvement in the cost. Here is what I found from k=2 through 10:

Kmeans

Blue is the training cost, while red is the cross-validation cost. Interestingly, the cross-validation cost was always below the training cost, indicating that the cross-validation points represent the training points well. There is clearly no overfitting because there is no large gap between costs. However, there is no obvious point at which increasing the number of clusters doesn’t help much.

The other solution to the number of clusters relies on evaluating different numbers of clusters only after later processing. If downstream processing works better with a certain number of clusters, then that number of clusters should be chosen. So, for example, if I put each cluster’s data through a neural network, how good is the error for each number of clusters?

I trained an 8:10:10:1 neural network on each cluster of points, so k=2 had 2 networks, and k=10 had 10 networks. I used fewer hidden neurons than before, on the theory that each cluster has less data, meaning that I can probably get away with a smaller parameter space. Here are the results:

Kmeansnn

Clearly the more clusters and the more networks, the better the output. Perhaps because more networks means smaller clusters, which in turn means less variation to account for. Interestingly, 8 clusters works about as well as 5 clusters, and it’s only with 9 and 10 clusters that more advantage is found. In any case, choosing k=10, here are the errors:

Kmeansnn training

Kmeansnn cv

Compared to training a single 8:20:20:1 network on the entire data set, clustering has definitely reduced the errors. Most errors in the training set are now under 5% (down from 10% before), and even the one troubling point from before (error 100%) has been knocked down to an error of 83%. The low errors in the cross-validation points — which, remember, the network has never seen — all lead us to believe that the networks trained are not overfit.

I would still want to look at those high-error points, perhaps even asking for the experimental data for those points to be rechecked or even rerun. But for now, I would be happy with this artificially intelligent concrete master.

For the next article, I’m going to go off the syllabus of the Machine Learning course, and talk about one of my favorite unsupervised learning algorithms, the autoencoder.


If we take a logistic function as in logistic regression, and feed the outputs of many logistic regressions into another logistic regression, and do this for several levels, we end up with a neural network architecture. This works nicely to increase the number of parameters as well as the number of features from the basic set you have, since a neural network’s hidden layers act as new features.

Neural network

Each non-input neuron in a layer gets its inputs from every neuron in the previous layer, including a fixed bias neuron which acts as the x_0 = 1 term we always have.

Rather than θ, we now call the parameters weights, and the outputs are now called activations. The equation for the output (activation) of neuron p in layer l is:

Breaking it down:

  • w^(l-1)_pq is the weight from neuron p in layer l-1 to neuron q in layer l
  • a^(l-1)_p is the activation of neuron p in layer l-1, and of course when p=0, the activation is by definition 1.
  • z^(l)_q is the usual weighted sum, specifically for neuron q in layer l.
  • g is some function, which we can take to be the logistic function.

So we see that the output of any given neuron is a logistic function of its inputs.
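
In symbols, using the notation from the list above:

z_q^{(l)} = \sum_{p} w_{pq}^{(l-1)} \, a_p^{(l-1)}, \qquad a_q^{(l)} = g\left( z_q^{(l)} \right)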

We will define the cost function for the entire output, for a single data point, to be as follows:

Note that we are using the linear regression cost, because we will want the output to be an actual output rather than a classification. The cost can be defined using the logistic cost function if the output is a classification.

Now, the algorithm proceeds as follows:

  1. Compute all the activations for a single data point
  2. For each output neuron q, compute:

  3. For each non-output neuron p, working backwards in layers from layer L-1 to layer 1, compute:

  4. Compute the weight updates as follows:

 

The last step can, in fact, be delayed. Simply present multiple data points, or even the entire training set, adding up the changes to the weights, and then only update the weights afterwards.

Because it is extraordinarily easy to get the implementation wrong, I highly suggest the use of a neural network library such as the impressively expansive Encog as opposed to implementing it yourself. Also, many neural network libraries include training algorithms other than backpropagation.

 

The Concrete Example

I used Encog to train a neural network on the concrete data from the earlier post. I first took the log of the output, since that seemed to represent the data better and led to less network error. Then I normalized the data, except I used the range 0-1 for both the days input and the strength output, since that seemed to make sense, and also led to less network error.

Here’s the Java code I used. Compile it with the Encog core library in the classpath. The only argument to it is the path to the Concrete_Data.csv file.

The network I chose, after some experimentation checking for under- and overfitting, was an 8:20:10:1 network. I used this network to train against different sizes of training sets to see the learning curves. Each set of data was presented to the network for 10,000 iterations of an algorithm called Resilient Backpropagation, which has various advantages over backpropagation, namely that the learning rate generally doesn’t have to be set.

Learning curve neural

As before, the blue line is the training cost, the mean squared error against the training set, and the red line is the cross-validation cost, the mean squared error against the cross-validation set. This is generally what I would expect for an algorithm that is neither underfitting nor overfitting. Overfitting would show a large gap between training and cross-validation, while underfitting would show high errors for both. 

If we saw underfitting, then we would have to increase the parameter space, which would mean increasing the number of neurons in the hidden layers. If we saw overfitting, then decreasing the parameter space would be appropriate, so decreasing the number of neurons in the hidden layers would help.

Since the range of the output is 0-1, over the entire training set we get an MSE (training) of 0.0003, which means the average error per data point is 0.017. This doesn’t quite tell the whole story, because if an output is supposed to be, say, 0.01, an error of 0.017 means that output wasn’t very well fit. Instead, let’s just look at the entire data set, ordered by value, after denormalization:

Errors training neural

Errors cv neural

The majority of errors fall under 10%, which is probably good enough. If I were concerned with the data points whose error was above 10%, I might be tempted to treat those data points as “difficult”, train a classifier to label data points as “difficult” or “not difficult”, and then train different regression networks on each class.

The problem with that is that I could end up overfitting my data again, this time manually. If I manually divide my points into “difficult” and “not difficult” points, then what is the difference between that and having more than two classes? How about as many classes as there are data points?

What would be nice is if I could have an automatic way to determine if there is more than one cluster in my data set. One clustering algorithm will be the subject of the next post.


There is a fun archive of machine learning data sets maintained by UC Irvine. For a concrete example, let’s take the Concrete Compressive Strength data set and try linear regression on it. (Get it? Concrete? Ha ha ha!) There are 1030 points in the data set, eight input features, and one output feature. Here is the basic info:

Feature #   Name                                   Range
1           Amount of Cement (kg/m3)               102 – 540
2           Amount of Blast Furnace Slag (kg/m3)   0 – 359.4
3           Amount of Fly Ash (kg/m3)              0 – 200.1
4           Amount of Water (kg/m3)                121.75 – 247
5           Amount of Superplasticizer (kg/m3)     0 – 32.2
6           Amount of Coarse Aggregate (kg/m3)     801 – 1145
7           Amount of Fine Aggregate (kg/m3)       594 – 992.6
8           Mixture Age (days)                     1 – 365
Output      Compressive Strength (MPa)             2.3 – 82.6

In keeping with the principle that the ranges of the features should be scaled to the range (-1, 1), we will subtract the midpoint of each range from each feature and divide by the new maximum. So, for example, the midpoint of feature 1 is 321, so subtracting brings the range to (-219, 219), and dividing by 219 brings the range to (-1, 1).
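
In Octave, that scaling step looks like this (X holds the raw features, one row per data point and one column per feature):

% Scale each feature to the range (-1, 1): subtract the midpoint of its
% range, then divide by the resulting maximum.
mids = (max(X) + min(X)) / 2;   % midpoint of each feature's range
X    = X - mids;                % each range is now symmetric about 0
X    = X ./ max(abs(X));        % each feature now lies within [-1, 1]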

Here’s a plot of feature 1 versus the output. There’s a lot of variation, but it does sort of look roughly correlated.

Feature1 graph

Here’s the Octave code: concrete_regression.m. You’ll also need to open Concrete_Data.xls and export to CSV to Concrete_Data.csv so that Octave can read the file. Place both Octave and CSV files in the same directory, change to that directory, run Octave, and then call concrete_regression().

Here is the learning curve and the parameters found. The training cost is in blue, while the cross-validation cost is in red.

Learning curve

The training and cross-validation costs are very close to each other, which is good. It means that the learned parameters are quite representative of the entire data set, so there is no overfitting.

However, the cost appears to be quite high: about 0.037. This means that the output is, on average, off by 0.27. Which is, by nearly any standard, terrible. Clearly there is some intense underfitting going on, and the only remedy is to get more features and more parameters.

But we could combine the features in infinite ways. How are we going to find good ways to combine the features? We’ll take a look at neural networks in the next article, which essentially combines the features for us, and gives us more parameters as well.

