<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Curious Insight]]></title><description><![CDATA[Technology, software, data science, machine learning, entrepreneurship, investing, and various other topics]]></description><link>https://www.johnwittenauer.net/</link><image><url>https://www.johnwittenauer.net/favicon.png</url><title>Curious Insight</title><link>https://www.johnwittenauer.net/</link></image><generator>Ghost 5.79</generator><lastBuildDate>Thu, 22 Feb 2024 22:09:13 GMT</lastBuildDate><atom:link href="https://www.johnwittenauer.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The Book Of Five Rings]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;m a big believer in an idea that Nassim Taleb popularized in his Incerto series called the <a href="https://en.wikipedia.org/wiki/Lindy_effect?ref=johnwittenauer.net">Lindy effect</a>.  Put simply, it&apos;s a theory that the future life expectancy of some non-perishable thing is roughly proportional to it&apos;s current age.  The world has had</p>]]></description><link>https://www.johnwittenauer.net/the-book-of-five-rings/</link><guid isPermaLink="false">5e1b57817dc8fc00383b2ce0</guid><category><![CDATA[Book Review]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Mon, 23 Mar 2020 00:12:44 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;m a big believer in an idea that Nassim Taleb popularized in his Incerto series called the <a href="https://en.wikipedia.org/wiki/Lindy_effect?ref=johnwittenauer.net">Lindy effect</a>.  Put simply, it&apos;s a theory that the future life expectancy of some non-perishable thing is roughly proportional to it&apos;s current age.  The world has had smartphones for around 10 years so we can reasonably estimate them to be around for another 10 years or so before something replaces them.  Conversely (and this really illustrates the point), the technology we call a &quot;chair&quot; has been around for thousands of years, so one should expect that chairs will be relevant for a long time to come.  It&apos;s easy to think of scenarios where this rule of thumb fails of course, but that&apos;s not the point.  It&apos;s not a statistical prediction but rather an observation about the nature of things which exist in the world but do not age (an &quot;indicator of robustness&quot; as Taleb put it).</p>
<p>The Lindy effect applies to things like technologies, but also ideas, and by extension the modalities that we use to communicate ideas (e.g. books).  I&apos;ve been spending a lot more time lately thinking about the age of the books I&apos;m reading.  It&apos;s easy to focus on newer books because there are so many coming out: they are well-marketed, have catchy titles, and get discussed a lot on podcasts.  But if the Lindy effect holds, it means that most of the best books in history were written a long time ago.  After incorporating this knowledge into my book selection strategy, I began seeking out old but enduring books that are still in print.  Such is how I came to read Miyamoto Musashi&apos;s masterpiece, &quot;The Book of Five Rings&quot;.</p>
<p>A typical biographical description of Musashi would begin by noting that he was a Japanese swordsman who lived in the 1600s.  But he wasn&apos;t just any swordsman - he is arguably the greatest swordsman who ever lived.  Musashi famously went undefeated in over 60 duels throughout his life (many of them to the death), a streak that no one else has ever come close to matching.  His legend has been passed down through generations and remains deeply embedded in Japanese culture to this day.</p>
<p>In the later years of his life, Musashi wrote &quot;The Book Of Five Rings&quot; to codify the two-sword martial arts style he had spent his life mastering.  Although the book is principally about the details of his study of martial arts, its true value is much broader and much deeper.  Look beyond the surface and there&apos;s a life philosophy (which Musashi calls &quot;The Way&quot;) whose insights apply to all aspects of our existence.</p>
<p>In the book, the five rings correspond to the five &quot;books&quot; or sections of the text which are meant to refer to the idea that there are different elements in battle, just as there are different physical elements in life.  Musashi named them Earth, Water, Fire, Wind, and Emptiness.  Below are some of my notes from each section.  One could describe the language Musashi uses as cryptic and hard to interpret, but to me that&apos;s sort of the point.  As I read through the book, I couldn&apos;t help but feel as though every statement has a deeper meaning that would only be revealed to me upon careful, deliberate reflection.  This process is still ongoing.</p>
<p>While I tried to capture the essence of the text in very short, concise passages, there is inevitably a great deal that was missed.  Consider these notes a starting point to uncovering the wisdom embedded in Musashi&apos;s work.</p>
<h3 id="earth">Earth</h3>
<p>The &#x201C;Way&#x201D; of something is a learned discipline or philosophy (i.e. the Way of Buddhism, the Way of the Carpenter).  The Way of Martial Arts, which Musashi refers to as &#x201C;Two Heavens, One Style&#x201D;, is to learn skills that are useful in all things.  The Way of the Martial Arts is a mastery of one&#x2019;s craft similar to carpentry, of which the sword is the essential martial art.  There is a rhythm to everything.  There is rhythm in the formless.  Victory is in knowing the rhythm of your opponent, in using a rhythm that is hard to grasp, and in developing a rhythm of emptiness rather than wisdom.</p>
<h3 id="water">Water</h3>
<p>Think deeply about the principles written in the book as though you discovered them yourself.  Make them part of yourself.  The mind should be centered, swaying peacefully.  Be watchful of the mind and do not let it become clouded.  Sharpen your wisdom.  Learn the good and bad of all things.  With every grip, stance, strike, do not think of the action itself.  Think only about cutting down your opponent.  With practice you will gradually grasp the principle of the Way.</p>
<h3 id="fire">Fire</h3>
<p>There are three initiatives to understand in order to defeat an opponent &#x2013; Initiative of Attack, Initiative of Waiting, and the Body-Body Initiative.  Knowing the conditions in which you find yourself means clearly observing your opponent and grasping the way to victory with certainty.  Become your opponent.  Move the shadow.  Control the light.  Impose fear.  Cause confusion.  Do not use the same tactic repeatedly.  The true Way of swordsmanship is to fight with your opponent and win.</p>
<h3 id="wind">Wind</h3>
<p>The True Way does not prefer a long or short sword or a forceful or weak stroke, does not specialize in a stance, and does not fix the eyes on a particular gaze.  It is not fast or slow, does not prefer interior or exterior positions, and does not dictate how to move your feet.  There is no &#x201C;best&#x201D; in any of these things.  There is only seeing through to its virtues with the mind.</p>
<h3 id="emptiness">Emptiness</h3>
<p>The heart of Emptiness is in the absence of anything with form and the inability to have knowledge thereof.  Knowing the existent, you know the nonexistent.  A warrior learns the way with certainty.  He has no confusion in his mind and is never lazy.  He polishes his mind and will, and sharpens the two eyes of broad observation and focused vision.  He clears away the clouds of confusion.  In Emptiness exists Good but no Evil.  Wisdom is Existence.  Principle is Existence.  The Way is Existence.  The Mind is Emptiness.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Deep Learning With Keras: Recurrent Networks]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This post is the fourth in a series on deep learning using Keras.  We&apos;ve already looked at dense networks with category embeddings, convolutional networks, and recommender systems.  For this installment we&apos;re going to use recurrent networks to create a character-level language model for text generation. We&</p>]]></description><link>https://www.johnwittenauer.net/deep-learning-with-keras-recurrent-networks/</link><guid isPermaLink="false">5d61498f85354d0038a3e654</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Data Science]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Sun, 25 Aug 2019 13:27:21 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This post is the fourth in a series on deep learning using Keras.  We&apos;ve already looked at dense networks with category embeddings, convolutional networks, and recommender systems.  For this installment we&apos;re going to use recurrent networks to create a character-level language model for text generation. We&apos;ll start with a simple fully-connected network and show how it can be used as an &quot;unrolled&quot; recurrent layer, then gradually build up from there until we have a model capable of generating semi-reasonable sounding text. Much of this content is based on Jeremy Howard&apos;s <a href="http://course.fast.ai/?ref=johnwittenauer.net">fast.ai lessons</a>. However, we&apos;ll use Keras instead of PyTorch and build out all of the code from scratch rather than relying on the fast.ai library.</p>
<p>The text corpus we&apos;re using for this task consists of the works of the philosopher Nietzsche. The whole corpus can be found <a href="https://s3.amazonaws.com/text-datasets/nietzsche.txt?ref=johnwittenauer.net">here</a>. Let&apos;s start by loading the data into memory and taking a peek at the beginning of the text.</p>
<pre><code class="language-python">%matplotlib inline
import io
import numpy as np
import keras
from keras.utils.data_utils import get_file

path = get_file(&apos;nietzsche.txt&apos;, origin=&apos;https://s3.amazonaws.com/text-datasets/nietzsche.txt&apos;)
with io.open(path, encoding=&apos;utf-8&apos;) as f:
    text = f.read().lower()

len(text)
</code></pre>
<pre>
600893
</pre>
<pre><code class="language-python">text[:400]
</code></pre>
<pre>
&apos;preface\n\n\nsupposing that truth is a woman--what then? is there not ground\nfor suspecting that all philosophers, in so far as they have been\ndogmatists, have failed to understand women--that the terrible\nseriousness and clumsy importunity with which they have usually paid\ntheir addresses to truth, have been unskilled and unseemly methods for\nwinning a woman? certainly she has never allowed herself &apos;
</pre>
<p>Now get the unique set of characters that appear in the text. This is our vocabulary.</p>
<pre><code class="language-python">chars = sorted(list(set(text)))
vocab_size = len(chars)
vocab_size
</code></pre>
<pre>
57
</pre>
<pre><code class="language-python">&apos;&apos;.join(chars)
</code></pre>
<pre>
&apos;\n !&quot;\&apos;(),-.0123456789:;=?[]_abcdefghijklmnopqrstuvwxyz&#xE4;&#xE6;&#xE9;&#xEB;&apos;
</pre>
<p>Let&apos;s create a dictionary that maps each unique character to an integer, which is what we&apos;ll feed into the model. The actual integer used isn&apos;t important, it just has to be unique (here we just take the index from the &quot;chars&quot; list above). It&apos;s also useful to have a reverse mapping to get back to characters in order to do something with the model output. Finally, create a &quot;mapped&quot; corpus where each character in the data has been replaced with its corresponding integer.</p>
<pre><code class="language-python">char_indices = {c: i for i, c in enumerate(chars)}
indices_char = {i: c for i, c in enumerate(chars)}
idx = [char_indices[c] for c in text]

idx[:20]
</code></pre>
<pre>
[42, 44, 31, 32, 27, 29, 31, 0, 0, 0, 45, 47, 42, 42, 41, 45, 35, 40, 33, 1]
</pre>
<p>We can convert from integers back to characters using something like this.</p>
<pre><code class="language-python">&apos;&apos;.join(indices_char[i] for i in idx[:100])
</code></pre>
<pre>
&apos;preface\n\n\nsupposing that truth is a woman--what then? is there not ground\nfor suspecting that all ph&apos;
</pre>
<p>For our first attempt, we&apos;ll build a model that accepts a 3-character sequence as input and tries to predict the following character in the text. For simplicity, we can just manually create each character sequence. Start by creating lists that take every 3rd character, offset by some amount between 0 and 3.</p>
<pre><code class="language-python">cs = 3
c1 = [idx[i] for i in range(0, len(idx) - cs, cs)]
c2 = [idx[i + 1] for i in range(0, len(idx) - cs, cs)]
c3 = [idx[i + 2] for i in range(0, len(idx) - cs, cs)]
c4 = [idx[i + 3] for i in range(0, len(idx) - cs, cs)]
</code></pre>
<p>This just converts the lists to numpy arrays. Notice that this approach resulted in non-overlapping sequences, i.e. we use characters 0-2 to predict character 3, then characters 3-5 to predict character 6, etc. That&apos;s why the array shape is about 1/3 the size of the original text. We&apos;ll see how to improve on this later.</p>
<pre><code class="language-python">x1 = np.stack(c1)
x2 = np.stack(c2)
x3 = np.stack(c3)
y = np.stack(c4)

x1.shape, y.shape
</code></pre>
<pre>
((200297,), (200297,))
</pre>
<p>Our model will use embeddings to represent each character. This is why we converted them to integers before - each integer gets turned into a vector in the embedding layer. Set some variables for the embedding vector size and the number of hidden units to use in the model. Finally, we need to convert the target variable to a one-hot character encoding. This is because the model outputs a probability for each character, and in order to score this properly it needs to be able to compare that output with an array that&apos;s structured the same way.</p>
<pre><code class="language-python">n_factors = 42
n_hidden = 256
y_cat = keras.utils.to_categorical(y)

y_cat.shape
</code></pre>
<pre>
(200297, 56)
</pre>
<p>Now we get to the first iteration of our model. The way I&apos;ve structured this is by defining the layers of the model so that they can be re-used across multiple inputs. For example, rather than create an embedding layer for each of the three character inputs, we&apos;re instead creating one embedding layer and sharing it. This is a reasonable approach to handling sequences since each input comes from an identical distribution.</p>
<p>The next thing to observe is the part where h is defined. The first character is fed through the hidden layer like normal, but the other characters in the sequence are doing something different. We&apos;re re-using the same layer, but instead of just taking the character as input, we&apos;re using the character + the previous output h. This is the &quot;hidden&quot; state of the model. I think about it in the following way: &quot;give me the output of this layer for character c conditioned on the fact that these other characters (represented by h) came before it&quot;.</p>
<p>You&apos;ll notice that there&apos;s no use of an RNN class at all. Basically what&apos;s going on here is we&apos;re implementing an &quot;unrolled&quot; RNN from scratch on our own.</p>
<pre><code class="language-python">from keras import backend as K
from keras.models import Model
from keras.layers import add
from keras.layers import Input, Reshape, Dense, Add
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam

def Char3Model(vocab_size, n_factors, n_hidden):
    embed_layer = Embedding(vocab_size, n_factors)
    reshape_layer = Reshape((n_factors,))
    input_layer = Dense(n_hidden, activation=&apos;relu&apos;)
    hidden_layer = Dense(n_hidden, activation=&apos;tanh&apos;)
    output_layer = Dense(vocab_size - 1, activation=&apos;softmax&apos;)  # 56 outputs to match y_cat above (the last vocab character never shows up as a target here)

    in1 = Input(shape=(1,))
    in2 = Input(shape=(1,))
    in3 = Input(shape=(1,))

    c1 = input_layer(reshape_layer(embed_layer(in1)))
    c2 = input_layer(reshape_layer(embed_layer(in2)))
    c3 = input_layer(reshape_layer(embed_layer(in3)))

    h = hidden_layer(c1)
    h = hidden_layer(add([h, c2]))
    h = hidden_layer(add([h, c3]))

    out = output_layer(h)

    model = Model(inputs=[in1, in2, in3], outputs=out)
    opt = Adam(lr=0.01)
    model.compile(loss=&apos;categorical_crossentropy&apos;, optimizer=opt)

    return model
</code></pre>
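<p>To make the recurrence above a bit more concrete, here&apos;s a minimal NumPy sketch (my own addition, not part of the original notebook) of what those shared layers compute step by step. The weight matrices and names below are hypothetical stand-ins and biases are omitted; it&apos;s only meant to show how the same weights get re-used while the hidden state carries information forward.</p>
<pre><code class="language-python">import numpy as np

rng = np.random.RandomState(0)
toy_factors, toy_hidden = 4, 8

W_in = rng.randn(toy_factors, toy_hidden) * 0.1   # stands in for the shared &quot;input&quot; dense layer
W_h = rng.randn(toy_hidden, toy_hidden) * 0.1     # stands in for the shared &quot;hidden&quot; dense layer

def step(h, char_embedding):
    # project the character embedding (relu, like input_layer), add the
    # previous hidden state, then apply the shared hidden layer (tanh)
    c = np.maximum(char_embedding.dot(W_in), 0)
    return np.tanh((h + c).dot(W_h))

h = np.zeros(toy_hidden)
for char_embedding in rng.randn(3, toy_factors):   # three characters, in order
    h = step(h, char_embedding)

h.shape   # (8,) - the state the softmax output layer reads from
</code></pre>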
<p>Train the model for a few iterations.</p>
<pre><code class="language-python">model = Char3Model(vocab_size, n_factors, n_hidden)
history = model.fit(x=[x1, x2, x3], y=y_cat, batch_size=512, epochs=3, verbose=1)
</code></pre>
<pre>
Epoch 1/3
200297/200297 [==============================] - 4s 20us/step - loss: 2.4007
Epoch 2/3
200297/200297 [==============================] - 2s 10us/step - loss: 2.0852
Epoch 3/3
200297/200297 [==============================] - 2s 10us/step - loss: 1.9470
</pre>
<p>In order to make sense of the model&apos;s output, we need a helper function that converts the character probability array that it returns into an actual character. This is where the reverse lookup table we created earlier comes in handy!</p>
<pre><code class="language-python">def get_next_char(model, s):
    idxs = [np.array([char_indices[c]]) for c in s]
    pred = model.predict(idxs)
    char_idx = np.argmax(pred)
    return chars[char_idx]

get_next_char(model, &apos; th&apos;)
</code></pre>
<pre>
&apos;e&apos;
</pre>
<pre><code class="language-python">get_next_char(model, &apos;and&apos;)
</code></pre>
<pre>
&apos; &apos;
</pre>
<p>It appears to be spitting out sensible results. The 3-character approach is very limiting though. That&apos;s not enough context for even a full word most of the time. For our next step, let&apos;s expand the input window to 8 characters. We can create an input array using some list comprehension magic to output a list of lists, then stacking them together into an array. Try experimenting with the logic below yourself to get a better sense of what it&apos;s doing. The target array is created in a similar manner as before.</p>
<pre><code class="language-python">cs = 8

c_in = [[idx[i + j] for i in range(cs)] for j in range(len(idx) - cs)]
c_out = [idx[j + cs] for j in range(len(idx) - cs)]

X = np.stack(c_in, axis=0)
y = np.stack(c_out)
</code></pre>
<p>Notice this time we&apos;re making better use of our data by making the sequences overlapping. For example, the first &quot;row&quot; in the data uses characters 0-7 to predict character 8. The next &quot;row&quot; uses characters 1-8 to predict character 9, and so on. We just increment by one each time. It does create a lot of duplicate data, but that&apos;s not a huge issue with a corpus of this size.</p>
<pre><code class="language-python">X.shape, y.shape
</code></pre>
<pre>
((600885, 8), (600885,))
</pre>
<p>It helps to look at an example to see how the data is formatted. Each row is a sequence of 8 characters from the text. As you go down the rows it&apos;s apparent they&apos;re offset by one character.</p>
<pre><code class="language-python">X[:cs, :cs]
</code></pre>
<pre>
array([[42, 44, 31, 32, 27, 29, 31,  0],
       [44, 31, 32, 27, 29, 31,  0,  0],
       [31, 32, 27, 29, 31,  0,  0,  0],
       [32, 27, 29, 31,  0,  0,  0, 45],
       [27, 29, 31,  0,  0,  0, 45, 47],
       [29, 31,  0,  0,  0, 45, 47, 42],
       [31,  0,  0,  0, 45, 47, 42, 42],
       [ 0,  0,  0, 45, 47, 42, 42, 41]])
</pre>
<pre><code class="language-python">y[:cs]
</code></pre>
<pre>
array([ 0,  0, 45, 47, 42, 42, 41, 45])
</pre>
<p>Since we have separate inputs for each character, Keras expects separate arrays rather than one big array. Also need to one-hot encode the target again.</p>
<pre><code class="language-python">X_array = [X[:, i] for i in range(X.shape[1])]
y_cat = keras.utils.to_categorical(y)
</code></pre>
<p>The 8-character model works exactly the same way as the 3-character model, there are just more of the same steps. Rather than write them all out in code, I converted it to a loop. Again, this is almost exactly the way an RNN works under the hood.</p>
<pre><code class="language-python">def CharLoopModel(vocab_size, n_chars, n_factors, n_hidden):
    embed_layer = Embedding(vocab_size, n_factors)
    reshape_layer = Reshape((n_factors,))
    input_layer = Dense(n_hidden, activation=&apos;relu&apos;)
    hidden_layer = Dense(n_hidden, activation=&apos;tanh&apos;)
    output_layer = Dense(vocab_size, activation=&apos;softmax&apos;)
    
    inputs = []
    for i in range(n_chars):
        inp = Input(shape=(1,))
        inputs.append(inp)
        c = input_layer(reshape_layer(embed_layer(inp)))
        if i == 0:
            h = hidden_layer(c)
        else:
            h = hidden_layer(add([h, c]))

    out = output_layer(h)

    model = Model(inputs=inputs, outputs=out)
    opt = Adam(lr=0.001)
    model.compile(loss=&apos;categorical_crossentropy&apos;, optimizer=opt)

    return model
</code></pre>
<p>Train the model a bit and generate some predictions.</p>
<pre><code class="language-python">model = CharLoopModel(vocab_size, cs, n_factors, n_hidden)
history = model.fit(x=X_array, y=y_cat, batch_size=512, epochs=5, verbose=1)
</code></pre>
<pre>
Epoch 1/5
600885/600885 [==============================] - 9s 15us/step - loss: 2.1838
Epoch 2/5
600885/600885 [==============================] - 8s 13us/step - loss: 1.7245
Epoch 3/5
600885/600885 [==============================] - 8s 13us/step - loss: 1.5888
Epoch 4/5
600885/600885 [==============================] - 8s 13us/step - loss: 1.5200
Epoch 5/5
600885/600885 [==============================] - 8s 13us/step - loss: 1.4781
</pre>
<pre><code class="language-python">get_next_char(model, &apos;for thos&apos;)
</code></pre>
<pre>
&apos;e&apos;
</pre>
<pre><code class="language-python">get_next_char(model, &apos;queens a&apos;)
</code></pre>
<pre>
&apos;n&apos;
</pre>
<p>Now we&apos;re ready to replace the loop with a real recurrent layer. The first thing to notice is that we no longer need to create separate inputs for each step in the sequence - recurrent layers in Keras are designed to accept 3-dimensional arrays where the 2nd dimension is the number of timesteps. We just need to add an extra dimension to the input shape with the number of characters.</p>
<p>The second wrinkle is the use of the &quot;TimeDistributed&quot; class on the embedding layer. Just as with the input, this is another more convenient way of doing what we were already doing by defining and re-using layers. Wrapping a layer with &quot;TimeDistributed&quot; basically says &quot;apply this to every timestep in the array&quot;. Like the RNN, it expects (and returns) a 3-dimensional array. The reshape operation is the same story, we just add another dimension to it. The RNN layer itself is very straightforward.</p>
<pre><code class="language-python">from keras.layers import TimeDistributed, SimpleRNN

def CharRnn(vocab_size, n_chars, n_factors, n_hidden):
    i = Input(shape=(n_chars, 1))
    x = TimeDistributed(Embedding(vocab_size, n_factors))(i)
    x = Reshape((n_chars, n_factors))(x)
    x = SimpleRNN(n_hidden, activation=&apos;tanh&apos;)(x)
    x = Dense(vocab_size, activation=&apos;softmax&apos;)(x)

    model = Model(inputs=i, outputs=x)
    opt = Adam(lr=0.001)
    model.compile(loss=&apos;categorical_crossentropy&apos;, optimizer=opt)

    return model
</code></pre>
<p>Let&apos;s look at a summary of the model. Notice the array shapes have a third dimension to them until we get on the other side of the RNN.</p>
<pre><code class="language-python">model = CharRnn(vocab_size, cs, n_factors, n_hidden)
model.summary()
</code></pre>
<pre>
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_12 (InputLayer)        (None, 8, 1)              0         
_________________________________________________________________
time_distributed_1 (TimeDist (None, 8, 1, 42)          2394      
_________________________________________________________________
reshape_3 (Reshape)          (None, 8, 42)             0         
_________________________________________________________________
simple_rnn_1 (SimpleRNN)     (None, 256)               76544     
_________________________________________________________________
dense_7 (Dense)              (None, 57)                14649     
=================================================================
Total params: 93,587
Trainable params: 93,587
Non-trainable params: 0
_________________________________________________________________
</pre>
<p>Reshape the input to match the 3-dimensional input format of (rows, timesteps, features). Since we only have one feature, the last dimension is trivially set to one.</p>
<pre><code class="language-python">X = X.reshape((X.shape[0], cs, 1))
X.shape
</code></pre>
<pre>
(600885, 8, 1)
</pre>
<p>Train the model for a bit. Notice that the loss looks almost identical to the last model! All we really did is shuffle things around to take advantage of some built-in classes that Keras provides. The model structure and performance should look no different than before.</p>
<pre><code class="language-python">history = model.fit(x=X, y=y_cat, batch_size=512, epochs=5, verbose=1)
</code></pre>
<pre>
Epoch 1/5
600885/600885 [==============================] - 11s 18us/step - loss: 2.2863
Epoch 2/5
600885/600885 [==============================] - 11s 18us/step - loss: 1.8356
Epoch 3/5
600885/600885 [==============================] - 10s 17us/step - loss: 1.6601
Epoch 4/5
600885/600885 [==============================] - 10s 17us/step - loss: 1.5672
Epoch 5/5
600885/600885 [==============================] - 10s 17us/step - loss: 1.5102
</pre>
<p>We can train it a bit longer at a lower learning rate to reduce the loss further.</p>
<pre><code class="language-python">K.set_value(model.optimizer.lr, 0.0001)
history = model.fit(x=X, y=y_cat, batch_size=512, epochs=3, verbose=1)
</code></pre>
<pre>
Epoch 1/3
600885/600885 [==============================] - 10s 17us/step - loss: 1.4350
Epoch 2/3
600885/600885 [==============================] - 10s 17us/step - loss: 1.4233
Epoch 3/3
600885/600885 [==============================] - 10s 17us/step - loss: 1.4175
</pre>
<pre><code class="language-python">def get_next_char(model, s):
    idxs = np.array([char_indices[c] for c in s])
    idxs = idxs.reshape((1, idxs.shape[0], 1))
    pred = model.predict(idxs)
    char_idx = np.argmax(pred)
    return chars[char_idx]

get_next_char(model, &apos;for thos&apos;)
</code></pre>
<pre>
&apos;e&apos;
</pre>
<p>Since the model is getting better, we can now try to generate more than one character of text. All we need is an initial seed of 8 characters and it can go on as long as we like. To do this, we&apos;ll create a simple helper function that continuously predicts the next character using the last 8 characters that it spit out (starting with the seed value).</p>
<pre><code class="language-python">def get_next_n_chars(model, s, n):
    r = s
    for i in range(n):
        c = get_next_char(model, s)
        r += c
        s = s[1:] + c
    return r

get_next_n_chars(model, &apos;for thos&apos;, 40)
</code></pre>
<pre>
&apos;for those who has not to be a conscience of the &apos;
</pre>
<p>It&apos;s definitely getting better. There are more improvements we can make though! In the current model, each instance of the data is completely independent. When a new sequence comes in, the model has no idea what came before that sequence. That &quot;hidden state&quot; mentioned earlier (which is now part of the RNN layer) gets thrown away. However, there&apos;s a way we can set this up that persists that hidden state through to the next part of the sequence. In other words, it conditions the output not only on the current 8 characters but all the characters that came before it as well.</p>
<p>The good news is that this capability is built into Keras&apos;s recurrent layers, we just need to set a flag to true! The bad news is that we need to re-think how the data is structured. Stateful models require 1) a fixed batch size, which is specified in the model input, and 2) that each batch be a &quot;slice&quot; of sequences such that the next batch contains the next part of each sequence. In other words, we need to split up our data (which is one long continuous stream of text) into n chunks of equal-length streams of text, where n is the batch size. Then, we need to carve up these n chunks into sequences of length 8 (which is the sequence length the model looks at) with the following character in each sequence being the target (the thing we&apos;re predicting).</p>
<p>If that sounds confusing and complicated, that&apos;s because it is. It took me a while to make sense of it (and figure out how to express it in code) but hopefully you can follow along. Below is the first step, which splits the data up into chunks and stacks them vertically into an array. The result is 64 equal-length continuous sequences of text.</p>
<pre><code class="language-python">bs = 64
seg_len = len(text) // bs
segments = [idx[i*seg_len:(i+1)*seg_len] for i in range(bs)]
segments = np.stack(segments)

segments.shape
</code></pre>
<pre>
(64, 9388)
</pre>
<p>One other change happening at the same time is we&apos;re no longer staggering the input by one character (which duplicates a lot of text because most of it is repeated in each row). Instead, we&apos;re now carving the data into chunks of non-overlapping characters like we did originally. However, we&apos;re going to make better use of it this time. Instead of just predicting character 8 based on characters 0-7, we&apos;re going to predict characters 1-8 conditioned on the characters in the sequence that came before them. Each pass will actually be 8 character predictions, and the loss function will be calculated across all of those outputs (we&apos;ll see how to do this in a minute).</p>
<p>Below we&apos;re creating a list of lists, where each sub-list is an 8-character sequence. The second list is offset by one (this is our target).</p>
<pre><code class="language-python">c_in = [segments[:,i*cs:(i+1)*cs] for i in range(seg_len // cs)]
c_out = [segments[:,(i*cs)+1:((i+1)*cs)+1] for i in range(seg_len // cs)]
</code></pre>
<p>Now we just need to concatenate and reshape these into arrays that we can use with the model. We end up with ~75,000 chunks of unique 8-character sequences.</p>
<pre><code class="language-python">X = np.concatenate(c_in)
X = X.reshape((X.shape[0], X.shape[1], 1))
y = np.concatenate(c_out)
y_cat = keras.utils.to_categorical(y)

X.shape, y_cat.shape
</code></pre>
<pre>
((75072, 8, 1), (75072, 8, 57))
</pre>
<p>Crucially, they are ordered such that the 65th row is a continuation of the 1st row, the 66th row is a continuation of the 2nd row, and so on all the way down.</p>
<pre><code class="language-python">&apos;&apos;.join(indices_char[i] for i in np.concatenate((X[0,:,0], X[64,:,0], X[128,:,0])))
</code></pre>
<pre>
&apos;preface\n\n\nsupposing that&apos;
</pre>
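<p>If it&apos;s still hard to see why that ordering works, here&apos;s a tiny self-contained version of the same slicing (my own illustration, not from the original notebook) using a made-up 20-character string, a batch size of 2, and 3-character sequences. Row k and row k + bs always hold consecutive pieces of the same stream of text, which is exactly what a stateful RNN needs.</p>
<pre><code class="language-python">toy_bs, toy_cs = 2, 3
toy_text = &apos;abcdefghijklmnopqrst&apos;                 # 20 characters
toy_seg_len = len(toy_text) // toy_bs              # 10 characters per stream
toy_segments = [toy_text[i*toy_seg_len:(i+1)*toy_seg_len] for i in range(toy_bs)]
# [&apos;abcdefghij&apos;, &apos;klmnopqrst&apos;]

toy_rows = [seg[i*toy_cs:(i+1)*toy_cs]
            for i in range(toy_seg_len // toy_cs)
            for seg in toy_segments]
# [&apos;abc&apos;, &apos;klm&apos;, &apos;def&apos;, &apos;nop&apos;, &apos;ghi&apos;, &apos;qrs&apos;]
# row 0 (&apos;abc&apos;) continues at row 0 + toy_bs (&apos;def&apos;), row 1 at row 3, and so on
</code></pre>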
<p>Next we can create the stateful RNN model. It&apos;s similar to the last one but there are a few wrinkles. The input specifies &quot;batch_shape&quot; and has three dimensions (this is a hard requirement to use stateful RNNs in Keras, and gets quite annoying during inference time). We&apos;ve set &quot;return_sequences&quot; to true, which changes the shape that the RNN returns and gives us an output for each step in the sequence. We&apos;ve set &quot;stateful&quot; to true, the motivation for which was already discussed. Finally, we&apos;ve wrapped the last dense layer with &quot;TimeDistributed&quot;. This is because the RNN is now returning a higher-dimensional array to account for the output at each timestep. Everything else works basically the same way.</p>
<pre><code class="language-python">def CharStatefulRnn(vocab_size, n_chars, n_factors, n_hidden, bs):
    i = Input(batch_shape=(bs, n_chars, 1))
    x = TimeDistributed(Embedding(vocab_size, n_factors))(i)
    x = Reshape((n_chars, n_factors))(x)
    x = SimpleRNN(n_hidden, activation=&apos;tanh&apos;, return_sequences=True, stateful=True)(x)
    x = TimeDistributed(Dense(vocab_size, activation=&apos;softmax&apos;))(x)

    model = Model(inputs=i, outputs=x)
    opt = Adam(lr=0.001)
    model.compile(loss=&apos;categorical_crossentropy&apos;, optimizer=opt)

    return model
</code></pre>
<p>Looking at the output shapes, we can see the effect of turning on &quot;return_sequences&quot;. Note that the number of model parameters has not changed. The complexity is identical, we&apos;ve just changed the task and the information available to solve it.</p>
<pre><code class="language-python">model = CharStatefulRnn(vocab_size, cs, n_factors, n_hidden, bs)
model.summary()
</code></pre>
<pre>
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_13 (InputLayer)        (64, 8, 1)                0         
_________________________________________________________________
time_distributed_2 (TimeDist (64, 8, 1, 42)            2394      
_________________________________________________________________
reshape_4 (Reshape)          (64, 8, 42)               0         
_________________________________________________________________
simple_rnn_2 (SimpleRNN)     (64, 8, 256)              76544     
_________________________________________________________________
time_distributed_3 (TimeDist (64, 8, 57)               14649     
=================================================================
Total params: 93,587
Trainable params: 93,587
Non-trainable params: 0
_________________________________________________________________
</pre>
<p>One quirk of using stateful RNNs is that we now have to manually reset the model state, it never goes away until we tell it to. I just created a simple callback that resets the state at the end of every epoch.</p>
<pre><code class="language-python">from keras.callbacks import Callback

class ResetModelState(Callback):    
    def on_epoch_end(self, epoch, logs):
        self.model.reset_states()

reset_state = ResetModelState()
</code></pre>
<p>Train the model for a while as before, with the addition of the callback to reset state between epochs.</p>
<pre><code class="language-python">model.fit(x=X, y=y_cat, batch_size=bs, epochs=8, verbose=1, callbacks=[reset_state], shuffle=False)
</code></pre>
<pre>
Epoch 1/8
75072/75072 [==============================] - 21s 277us/step - loss: 2.2509
Epoch 2/8
75072/75072 [==============================] - 20s 261us/step - loss: 1.8441
Epoch 3/8
75072/75072 [==============================] - 20s 263us/step - loss: 1.6865
Epoch 4/8
75072/75072 [==============================] - 19s 259us/step - loss: 1.6052
Epoch 5/8
75072/75072 [==============================] - 20s 261us/step - loss: 1.5540
Epoch 6/8
75072/75072 [==============================] - 20s 262us/step - loss: 1.5186
Epoch 7/8
75072/75072 [==============================] - 20s 261us/step - loss: 1.4922
Epoch 8/8
75072/75072 [==============================] - 20s 263us/step - loss: 1.4714
</pre>
<pre><code class="language-python">K.set_value(model.optimizer.lr, 0.0001)
model.fit(x=X, y=y_cat, batch_size=bs, epochs=3, verbose=1, callbacks=[reset_state], shuffle=False)
</code></pre>
<pre>
Epoch 1/3
75072/75072 [==============================] - 19s 259us/step - loss: 1.4280
Epoch 2/3
75072/75072 [==============================] - 19s 259us/step - loss: 1.4191
Epoch 3/3
75072/75072 [==============================] - 20s 264us/step - loss: 1.4154
</pre>
<p>The &quot;get next&quot; functions need to be updated since our approach has changed. One of the annoying things about stateful models is the batch size is fixed, so even when making a prediction it needs an array of the same size, no matter if we just want to predict one sequence. I got around this with some numpy hackery.</p>
<pre><code class="language-python">def get_next_char(model, bs, s):
    idxs = np.array([char_indices[c] for c in s])
    idxs = idxs.reshape((1, idxs.shape[0], 1))
    idxs = np.repeat(idxs, bs, axis=0)
    pred = model.predict(idxs, batch_size=bs)
    char_idx = np.argmax(pred[0, 7])  # prediction at the last timestep of the first (real) sequence in the batch
    return chars[char_idx]

def get_next_n_chars(model, bs, s, n):
    r = s
    for i in range(n):
        c = get_next_char(model, bs, s)
        r += c
        s = s[1:] + c
    return r

get_next_n_chars(model, bs, &apos;for thos&apos;, 40)
</code></pre>
<pre>
&apos;for those in the same the same the same the same&apos;
</pre>
<p>The output is actually a bit worse than before, but we&apos;re still using simple RNNs which aren&apos;t that great to begin with. The real fun comes when we make the jump to a more complex unit like the LSTM. The details of LSTMs are beyond the scope of this post, but there&apos;s a great blog post that everyone links to as the canonical explainer for LSTMs, which you can find <a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/?ref=johnwittenauer.net">here</a>. This is the easiest step yet as the only thing we need to do is replace the class name. The only other change I made is increasing the number of hidden units. Everything else stays exactly the same.</p>
<pre><code class="language-python">from keras.layers import LSTM

n_hidden = 512

def CharStatefulLSTM(vocab_size, n_chars, n_factors, n_hidden, bs):
    i = Input(batch_shape=(bs, n_chars, 1))
    x = TimeDistributed(Embedding(vocab_size, n_factors))(i)
    x = Reshape((n_chars, n_factors))(x)
    x = LSTM(n_hidden, return_sequences=True, stateful=True)(x)
    x = TimeDistributed(Dense(vocab_size, activation=&apos;softmax&apos;))(x)

    model = Model(inputs=i, outputs=x)
    opt = Adam(lr=0.001)
    model.compile(loss=&apos;categorical_crossentropy&apos;, optimizer=opt)

    return model
</code></pre>
<p>LSTMs need to train for a bit longer. We&apos;ll do 20 epochs at each learning rate.</p>
<pre><code class="language-python">model = CharStatefulLSTM(vocab_size, cs, n_factors, n_hidden, bs)
model.fit(x=X, y=y_cat, batch_size=bs, epochs=20, verbose=1, callbacks=[reset_state], shuffle=False)
</code></pre>
<pre>
Epoch 1/20
75072/75072 [==============================] - 30s 401us/step - loss: 2.1748
Epoch 2/20
75072/75072 [==============================] - 29s 380us/step - loss: 1.6091
Epoch 3/20
75072/75072 [==============================] - 29s 381us/step - loss: 1.4487
Epoch 4/20
75072/75072 [==============================] - 28s 379us/step - loss: 1.3695
Epoch 5/20
75072/75072 [==============================] - 29s 383us/step - loss: 1.3181
Epoch 6/20
75072/75072 [==============================] - 29s 385us/step - loss: 1.2797
Epoch 7/20
75072/75072 [==============================] - 29s 382us/step - loss: 1.2500
Epoch 8/20
75072/75072 [==============================] - 29s 388us/step - loss: 1.2254
Epoch 9/20
75072/75072 [==============================] - 29s 380us/step - loss: 1.2052
Epoch 10/20
75072/75072 [==============================] - 28s 377us/step - loss: 1.1886
Epoch 11/20
75072/75072 [==============================] - 29s 385us/step - loss: 1.1754
Epoch 12/20
75072/75072 [==============================] - 28s 379us/step - loss: 1.1649
Epoch 13/20
75072/75072 [==============================] - 29s 385us/step - loss: 1.1563
Epoch 14/20
75072/75072 [==============================] - 29s 390us/step - loss: 1.1499
Epoch 15/20
75072/75072 [==============================] - 29s 383us/step - loss: 1.1447
Epoch 16/20
75072/75072 [==============================] - 28s 377us/step - loss: 1.1404
Epoch 17/20
75072/75072 [==============================] - 29s 384us/step - loss: 1.1371
Epoch 18/20
75072/75072 [==============================] - 29s 383us/step - loss: 1.1334
Epoch 19/20
75072/75072 [==============================] - 28s 379us/step - loss: 1.1328
Epoch 20/20
75072/75072 [==============================] - 28s 378us/step - loss: 1.1314
</pre>
<pre><code class="language-python">K.set_value(model.optimizer.lr, 0.0001)
model.fit(x=X, y=y_cat, batch_size=bs, epochs=20, verbose=1, callbacks=[reset_state], shuffle=False)
</code></pre>
<pre>
Epoch 1/20
75072/75072 [==============================] - 29s 382us/step - loss: 1.1015
Epoch 2/20
75072/75072 [==============================] - 32s 428us/step - loss: 1.0755
Epoch 3/20
75072/75072 [==============================] - 33s 442us/step - loss: 1.0633
Epoch 4/20
75072/75072 [==============================] - 30s 406us/step - loss: 1.0552
Epoch 5/20
75072/75072 [==============================] - 29s 381us/step - loss: 1.0489
Epoch 6/20
75072/75072 [==============================] - 28s 372us/step - loss: 1.0434
Epoch 7/20
75072/75072 [==============================] - 28s 372us/step - loss: 1.0392
Epoch 8/20
75072/75072 [==============================] - 29s 381us/step - loss: 1.0354
Epoch 9/20
75072/75072 [==============================] - 28s 376us/step - loss: 1.0323
Epoch 10/20
75072/75072 [==============================] - 28s 379us/step - loss: 1.0293
Epoch 11/20
75072/75072 [==============================] - 28s 379us/step - loss: 1.0264
Epoch 12/20
75072/75072 [==============================] - 28s 373us/step - loss: 1.0246
Epoch 13/20
75072/75072 [==============================] - 28s 376us/step - loss: 1.0224
Epoch 14/20
75072/75072 [==============================] - 28s 373us/step - loss: 1.0203
Epoch 15/20
75072/75072 [==============================] - 29s 382us/step - loss: 1.0183
Epoch 16/20
75072/75072 [==============================] - 28s 376us/step - loss: 1.0162
Epoch 17/20
75072/75072 [==============================] - 28s 376us/step - loss: 1.0150
Epoch 18/20
75072/75072 [==============================] - 28s 376us/step - loss: 1.0134
Epoch 19/20
75072/75072 [==============================] - 28s 377us/step - loss: 1.0125
Epoch 20/20
75072/75072 [==============================] - 28s 378us/step - loss: 1.0108
</pre>
<p>And now the moment of truth!</p>
<pre><code class="language-python">pprint(get_next_n_chars(model, bs, &apos;for thos&apos;, 400))
</code></pre>
<pre>
(&apos;for those whoever be no longer for their shows that is the basic of the &apos;
 &apos;conseque of the conseque once more proves and the same of the consequent, &apos;
 &apos;and at the other that is the basic of the conseque perfeaced itself to the &apos;
 &apos;sense and self-conseque contemptations of the conseque once still that the &apos;
 &apos;great people take a soul as a profoundination and an artistic as something &apos;
 &apos;might be the most problem and self-co&apos;)
</pre>
<p>Ha, well I wouldn&apos;t quite call it sensible but it&apos;s not super-terrible either. It&apos;s forming mostly complete words, occasionally using punctuation, etc. Not bad for being trained one character at a time. There are many ways that this can be improved of course, but hopefully this has illustrated the key concepts to building a sequence model.</p>
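<p>As one example of an easy improvement (this snippet is my own addition, not something covered in the original notebook): the repetition in the generated text comes partly from always taking the argmax. Sampling the next character from the softmax distribution instead, with an optional &quot;temperature&quot; to control how adventurous the sampling is, usually produces more varied output. A rough sketch using the same stateful model and lookup tables from above:</p>
<pre><code class="language-python">def sample_next_char(model, bs, s, temperature=0.8):
    # same input plumbing as get_next_char above
    idxs = np.array([char_indices[c] for c in s])
    idxs = idxs.reshape((1, idxs.shape[0], 1))
    idxs = np.repeat(idxs, bs, axis=0)
    pred = model.predict(idxs, batch_size=bs)[0, -1]

    # re-weight the distribution and sample from it instead of taking the argmax
    pred = np.log(pred + 1e-8) / temperature
    probs = np.exp(pred) / np.sum(np.exp(pred))
    char_idx = np.random.choice(len(chars), p=probs)
    return chars[char_idx]
</code></pre>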
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Deep Learning With Keras: Recommender Systems]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In this post we&apos;ll continue the series on deep learning by using the popular Keras framework to build a recommender system.  This use case is much less common in deep learning literature than things like image classifiers or text generators, but may arguably be an even more common</p>]]></description><link>https://www.johnwittenauer.net/deep-learning-with-keras-recommender-systems/</link><guid isPermaLink="false">5cc10953ca252900bf5cf945</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Data Science]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Mon, 29 Apr 2019 18:36:03 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In this post we&apos;ll continue the series on deep learning by using the popular Keras framework to build a recommender system.  This use case is much less common in deep learning literature than things like image classifiers or text generators, but may arguably be an even more common problem.  In fact, as you&apos;ll see below, it&apos;s debatable whether this topic even qualifies as &quot;deep learning&quot; because we&apos;re going to see how to build a pretty good recommender system without using a neural network at all!  We will, however, take advantage of the power of a modern computation framework like Keras to implement the recommender with minimal code.  We&apos;ll try a couple different approaches using a technique called <a href="https://en.wikipedia.org/wiki/Collaborative_filtering?ref=johnwittenauer.net">collaborative filtering</a>. Finally we&apos;ll build a true neural network and see how it compares to the collaborative filtering approach.</p>
<p>The data used for this task is the <a href="http://files.grouplens.org/datasets/movielens/ml-latest-small.zip?ref=johnwittenauer.net">MovieLens</a> data set. As with the previous posts, much of this content is originally based on Jeremy Howard&apos;s excellent <a href="http://www.fast.ai/?ref=johnwittenauer.net">fast.ai lessons</a>.</p>
<p>I&apos;ve already saved the zip file to a local directory so we can get started with some imports and reading in the ratings.csv file, which is where the data for this task comes from.</p>
<pre><code class="language-python">%matplotlib inline
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

PATH = &apos;/home/paperspace/data/ml-latest-small/&apos;

ratings = pd.read_csv(PATH + &apos;ratings.csv&apos;)
ratings.head()
</code></pre>
<pre>
userId	movieId	rating	timestamp
1	31	2.5	1260759144
1	1029	3.0	1260759179
1	1061	3.0	1260759182
1	1129	2.0	1260759185
1	1172	4.0	1260759205
</pre>
<p>The data is tabular and consists of a user ID, a movie ID, and a rating (there&apos;s also a timestamp but we won&apos;t use it for this task). Our task is to predict the rating for a user/movie pair, with the idea that if we had a model that&apos;s good at this task then we could predict how a user would rate movies they haven&apos;t seen yet and recommend movies with the highest predicted rating.</p>
<p>The zip file also includes a listing of movies and their associated genres. We don&apos;t actually need this for the model but it&apos;s useful to know about.</p>
<pre><code class="language-python">movies = pd.read_csv(PATH + &apos;movies.csv&apos;)
movies.head()
</code></pre>
<pre>

movieId	title	genres
1	Toy Story (1995) Adventure|Animation|Children|Comedy|Fantasy
2	Jumanji (1995) Adventure|Children|Fantasy
3	Grumpier Old Men (1995) Comedy|Romance
4	Waiting to Exhale (1995) Comedy|Drama|Romance
5	Father of the Bride Part II (1995) Comedy
</pre>
<p>To get a better sense of what the data looks like, we can turn it into a table by selecting the top 15 users/movies from the data and joining them together. The result shows how each of the top users rated each of the top movies.</p>
<pre><code class="language-python">g = ratings.groupby(&apos;userId&apos;)[&apos;rating&apos;].count()
top_users = g.sort_values(ascending=False)[:15]

g = ratings.groupby(&apos;movieId&apos;)[&apos;rating&apos;].count()
top_movies = g.sort_values(ascending=False)[:15]

top_r = ratings.join(top_users, rsuffix=&apos;_r&apos;, how=&apos;inner&apos;, on=&apos;userId&apos;)
top_r = top_r.join(top_movies, rsuffix=&apos;_r&apos;, how=&apos;inner&apos;, on=&apos;movieId&apos;)

pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)
</code></pre>
<pre>
movieId	1	110	260	296	318	356	480	527	589	593	608	1196	1198	1270	2571
userId															
15	2.0	3.0	5.0	5.0	2.0	1.0	3.0	4.0	4.0	5.0	5.0	5.0	4.0	5.0	5.0
30	4.0	5.0	4.0	5.0	5.0	5.0	4.0	5.0	4.0	4.0	5.0	4.0	5.0	5.0	3.0
73	5.0	4.0	4.5	5.0	5.0	5.0	4.0	5.0	3.0	4.5	4.0	5.0	5.0	5.0	4.5
212	3.0	5.0	4.0	4.0	4.5	4.0	3.0	5.0	3.0	4.0	NaN	NaN	3.0	3.0	5.0
213	3.0	2.5	5.0	NaN	NaN	2.0	5.0	NaN	4.0	2.5	2.0	5.0	3.0	3.0	4.0
294	4.0	3.0	4.0	NaN	3.0	4.0	4.0	4.0	3.0	NaN	NaN	4.0	4.5	4.0	4.5
311	3.0	3.0	4.0	3.0	4.5	5.0	4.5	5.0	4.5	2.0	4.0	3.0	4.5	4.5	4.0
380	4.0	5.0	4.0	5.0	4.0	5.0	4.0	NaN	4.0	5.0	4.0	4.0	NaN	3.0	5.0
452	3.5	4.0	4.0	5.0	5.0	4.0	5.0	4.0	4.0	5.0	5.0	4.0	4.0	4.0	2.0
468	4.0	3.0	3.5	3.5	3.5	3.0	2.5	NaN	NaN	3.0	4.0	3.0	3.5	3.0	3.0
509	3.0	5.0	5.0	5.0	4.0	4.0	3.0	5.0	2.0	4.0	4.5	5.0	5.0	3.0	4.5
547	3.5	NaN	NaN	5.0	5.0	2.0	3.0	5.0	NaN	5.0	5.0	2.5	2.0	3.5	3.5
564	4.0	1.0	2.0	5.0	NaN	3.0	5.0	4.0	5.0	5.0	5.0	5.0	5.0	3.0	3.0
580	4.0	4.5	4.0	4.5	4.0	3.5	3.0	4.0	4.5	4.0	4.5	4.0	3.5	3.0	4.5
624	5.0	NaN	5.0	5.0	NaN	3.0	3.0	NaN	3.0	5.0	4.0	5.0	5.0	5.0	2.0
</pre>
<p>To build our first collaborative filtering model, we need to take care of a few things first. The user/movie fields are currently non-sequential integers representing some unique ID for that entity. We need them to be sequential starting at zero to use for modeling (you&apos;ll see why later). We can use scikit-learn&apos;s LabelEncoder class to transform the fields. We&apos;ll also create variables with the total number of unique users and movies in the data, as well as the min and max ratings present in the data, for reasons that will become apparent shortly.</p>
<pre><code class="language-python">user_enc = LabelEncoder()
ratings[&apos;user&apos;] = user_enc.fit_transform(ratings[&apos;userId&apos;].values)
n_users = ratings[&apos;user&apos;].nunique()

item_enc = LabelEncoder()
ratings[&apos;movie&apos;] = item_enc.fit_transform(ratings[&apos;movieId&apos;].values)
n_movies = ratings[&apos;movie&apos;].nunique()

ratings[&apos;rating&apos;] = ratings[&apos;rating&apos;].values.astype(np.float32)
min_rating = min(ratings[&apos;rating&apos;])
max_rating = max(ratings[&apos;rating&apos;])

n_users, n_movies, min_rating, max_rating
</code></pre>
<pre>
(671, 9066, 0.5, 5.0)
</pre>
<p>Create a traditional (X, y) pairing of data and label, then split the data into training and test data sets.</p>
<pre><code class="language-python">X = ratings[[&apos;user&apos;, &apos;movie&apos;]].values
y = ratings[&apos;rating&apos;].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

X_train.shape, X_test.shape, y_train.shape, y_test.shape
</code></pre>
<pre>
((90003, 2), (10001, 2), (90003,), (10001,))
</pre>
<p>Another constant we&apos;ll need for the model is the number of factors per user/movie. This number can be whatever we want, however for the collaborative filtering model it does need to be the same size for both users and movies. When Jeremy covered this in his class, he said he played around with different numbers and 50 seemed to work best so we&apos;ll go with that.</p>
<p>Finally, we need to turn users and movies into separate arrays in the training and test data. This is because in Keras they&apos;ll each be defined as distinct inputs, and the way Keras works is each input needs to be fed in as its own array.</p>
<pre><code class="language-python">n_factors = 50

X_train_array = [X_train[:, 0], X_train[:, 1]]
X_test_array = [X_test[:, 0], X_test[:, 1]]
</code></pre>
<p>Now we get to the model itself. The main idea here is we&apos;re going to use embeddings to represent each user and each movie in the data. These embeddings will be vectors (of size n_factors) that start out as random numbers but are fit by the model to capture the essential qualities of each user/movie. We can accomplish this by computing the dot product between a user vector and a movie vector to get a predicted rating. The code is fairly simple, there isn&apos;t even a traditional neural network layer or activation involved. I stuck some regularization on the embedding layers and used a different initializer but even that probably isn&apos;t necessary. Notice that this is where we need the number of unique users and movies, since those are required to define the size of each embedding matrix.</p>
<pre><code class="language-python">from keras.models import Model
from keras.layers import Input, Reshape, Dot
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.regularizers import l2

def RecommenderV1(n_users, n_movies, n_factors):
    user = Input(shape=(1,))
    u = Embedding(n_users, n_factors, embeddings_initializer=&apos;he_normal&apos;,
                  embeddings_regularizer=l2(1e-6))(user)
    u = Reshape((n_factors,))(u)
    
    movie = Input(shape=(1,))
    m = Embedding(n_movies, n_factors, embeddings_initializer=&apos;he_normal&apos;,
                  embeddings_regularizer=l2(1e-6))(movie)
    m = Reshape((n_factors,))(m)
    
    x = Dot(axes=1)([u, m])

    model = Model(inputs=[user, movie], outputs=x)
    opt = Adam(lr=0.001)
    model.compile(loss=&apos;mean_squared_error&apos;, optimizer=opt)

    return model
</code></pre>
<p>This is kind of a neat example of how flexible and powerful modern computation frameworks like Keras and PyTorch are. Even though these are billed as deep learning libraries, they have the building blocks to quickly create any computation graph you want and get automatic differentiation essentially for free. Below you can see that all of the parameters are in the embedding layers, we don&apos;t have any traditional neural net components at all.</p>
<pre><code class="language-python">model = RecommenderV1(n_users, n_movies, n_factors)
model.summary()
</code></pre>
<pre>
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 1)            0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            (None, 1)            0                                            
__________________________________________________________________________________________________
embedding_1 (Embedding)         (None, 1, 50)        33550       input_1[0][0]                    
__________________________________________________________________________________________________
embedding_2 (Embedding)         (None, 1, 50)        453300      input_2[0][0]                    
__________________________________________________________________________________________________
reshape_1 (Reshape)             (None, 50)           0           embedding_1[0][0]                
__________________________________________________________________________________________________
reshape_2 (Reshape)             (None, 50)           0           embedding_2[0][0]                
__________________________________________________________________________________________________
dot_1 (Dot)                     (None, 1)            0           reshape_1[0][0]                  
                                                                 reshape_2[0][0]                  
==================================================================================================
Total params: 486,850
Trainable params: 486,850
Non-trainable params: 0
__________________________________________________________________________________________________
</pre>
<p>Let&apos;s go ahead and train this for a few epochs and see what we get.</p>
<pre><code class="language-python">history = model.fit(x=X_train_array, y=y_train, batch_size=64, epochs=5,
                    verbose=1, validation_data=(X_test_array, y_test))
</code></pre>
<pre>
Train on 90003 samples, validate on 10001 samples
Epoch 1/5
90003/90003 [==============================] - 6s 66us/step - loss: 9.7935 - val_loss: 3.4641
Epoch 2/5
90003/90003 [==============================] - 4s 49us/step - loss: 2.0427 - val_loss: 1.6521
Epoch 3/5
90003/90003 [==============================] - 4s 49us/step - loss: 1.1574 - val_loss: 1.3535
Epoch 4/5
90003/90003 [==============================] - 4s 48us/step - loss: 0.9027 - val_loss: 1.2607
Epoch 5/5
90003/90003 [==============================] - 4s 48us/step - loss: 0.7786 - val_loss: 1.2209
</pre>
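<p>As a quick aside (this snippet is my own addition and wasn&apos;t part of the original post), it&apos;s worth seeing how a trained model like this would actually be used to recommend something. The rough idea: score every movie a user hasn&apos;t rated yet, sort by the predicted rating, and translate the top encoded indices back into real movie IDs. The helper below is a sketch along those lines using the encoders defined earlier.</p>
<pre><code class="language-python">def recommend_movies(model, ratings, user_id, n=10):
    # encode the raw userId the same way the training data was encoded
    user = user_enc.transform([user_id])[0]

    # candidate movies are the encoded indices this user has not rated yet
    seen = set(ratings.loc[ratings[&apos;user&apos;] == user, &apos;movie&apos;])
    candidates = np.array([m for m in range(n_movies) if m not in seen])

    # predict a rating for every candidate and keep the top n
    user_array = np.full_like(candidates, user)
    preds = model.predict([user_array, candidates]).flatten()
    top = candidates[np.argsort(preds)[::-1][:n]]

    # map the encoded movie indices back to the original movieId values
    return item_enc.inverse_transform(top)
</code></pre>
<p>Plugging the resulting movie IDs back into the movies dataframe would give the actual titles.</p>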
<p>A validation loss of around 1.22 is not bad for a first try, but we can make some improvements. The first thing we can do is add a &quot;bias&quot; to each embedding. The concept is similar to the bias in a fully-connected layer or the intercept in a linear model. It just provides an extra degree of freedom. We can implement this idea using new embedding layers with a vector length of one. The bias embeddings get added to the result of the dot product.</p>
<p>The second improvement we can make is running the output of the dot product through a sigmoid layer and then scaling the result using the min and max ratings in the data. This is a neat technique that introduces a non-linearity into the output and results in a modest performance bump.</p>
<p>I also refactored the code a bit by pulling out the embedding layer and reshape operation into a separate class.</p>
<pre><code class="language-python">from keras.layers import Add, Activation, Lambda

class EmbeddingLayer:
    def __init__(self, n_items, n_factors):
        self.n_items = n_items
        self.n_factors = n_factors
    
    def __call__(self, x):
        x = Embedding(self.n_items, self.n_factors, embeddings_initializer=&apos;he_normal&apos;,
                      embeddings_regularizer=l2(1e-6))(x)
        x = Reshape((self.n_factors,))(x)
        return x

def RecommenderV2(n_users, n_movies, n_factors, min_rating, max_rating):
    user = Input(shape=(1,))
    u = EmbeddingLayer(n_users, n_factors)(user)
    ub = EmbeddingLayer(n_users, 1)(user)
    
    movie = Input(shape=(1,))
    m = EmbeddingLayer(n_movies, n_factors)(movie)
    mb = EmbeddingLayer(n_movies, 1)(movie)

    x = Dot(axes=1)([u, m])
    x = Add()([x, ub, mb])
    x = Activation(&apos;sigmoid&apos;)(x)
    x = Lambda(lambda x: x * (max_rating - min_rating) + min_rating)(x)

    model = Model(inputs=[user, movie], outputs=x)
    opt = Adam(lr=0.001)
    model.compile(loss=&apos;mean_squared_error&apos;, optimizer=opt)

    return model
</code></pre>
<p>The model summary shows the new graph. Notice the additional embedding layers with parameter numbers equal to the unique user and movie counts.</p>
<pre><code class="language-python">model = RecommenderV2(n_users, n_movies, n_factors, min_rating, max_rating)
model.summary()
</code></pre>
<pre>
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_3 (InputLayer)            (None, 1)            0                                            
__________________________________________________________________________________________________
input_4 (InputLayer)            (None, 1)            0                                            
__________________________________________________________________________________________________
embedding_3 (Embedding)         (None, 1, 50)        33550       input_3[0][0]                    
__________________________________________________________________________________________________
embedding_5 (Embedding)         (None, 1, 50)        453300      input_4[0][0]                    
__________________________________________________________________________________________________
reshape_3 (Reshape)             (None, 50)           0           embedding_3[0][0]                
__________________________________________________________________________________________________
reshape_5 (Reshape)             (None, 50)           0           embedding_5[0][0]                
__________________________________________________________________________________________________
embedding_4 (Embedding)         (None, 1, 1)         671         input_3[0][0]                    
__________________________________________________________________________________________________
embedding_6 (Embedding)         (None, 1, 1)         9066        input_4[0][0]                    
__________________________________________________________________________________________________
dot_2 (Dot)                     (None, 1)            0           reshape_3[0][0]                  
                                                                 reshape_5[0][0]                  
__________________________________________________________________________________________________
reshape_4 (Reshape)             (None, 1)            0           embedding_4[0][0]                
__________________________________________________________________________________________________
reshape_6 (Reshape)             (None, 1)            0           embedding_6[0][0]                
__________________________________________________________________________________________________
add_1 (Add)                     (None, 1)            0           dot_2[0][0]                      
                                                                 reshape_4[0][0]                  
                                                                 reshape_6[0][0]                  
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 1)            0           add_1[0][0]                      
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 1)            0           activation_1[0][0]               
==================================================================================================
Total params: 496,587
Trainable params: 496,587
Non-trainable params: 0
__________________________________________________________________________________________________
</pre>
<pre><code class="language-python">history = model.fit(x=X_train_array, y=y_train, batch_size=64, epochs=5,
                    verbose=1, validation_data=(X_test_array, y_test))
</code></pre>
<pre>
Train on 90003 samples, validate on 10001 samples
Epoch 1/5
90003/90003 [==============================] - 6s 64us/step - loss: 1.2850 - val_loss: 0.9083
Epoch 2/5
90003/90003 [==============================] - 5s 57us/step - loss: 0.7445 - val_loss: 0.7801
Epoch 3/5
90003/90003 [==============================] - 5s 57us/step - loss: 0.5615 - val_loss: 0.7646
Epoch 4/5
90003/90003 [==============================] - 5s 57us/step - loss: 0.4273 - val_loss: 0.7669
Epoch 5/5
90003/90003 [==============================] - 5s 58us/step - loss: 0.3298 - val_loss: 0.7823
</pre>
<p>Those two additions to the model resulted in a pretty sizable improvement. Validation error is now down to ~0.76, which is about as good as what Jeremy got (and I believe close to SOTA for this data set).</p>
<p>That pretty much covers the conventional approach to solving this problem, but there&apos;s another way we can tackle this. Instead of taking the dot product of the embedding vectors, what if we just concatenated the embeddings together and stuck a fully-connected layer on top of them? It&apos;s still not technically &quot;deep&quot; but it would at least be a neural network! To modify the code, we can remove the bias embeddings from V2 and do a concat on the embedding layers instead. Then we can add some dropout, insert a dense layer, and stick some dropout on the dense layer as well. Finally, we&apos;ll run it through a single-unit dense layer to keep the sigmoid trick at the end.</p>
<pre><code class="language-python">from keras.layers import Concatenate, Dense, Dropout

def RecommenderNet(n_users, n_movies, n_factors, min_rating, max_rating):
    user = Input(shape=(1,))
    u = EmbeddingLayer(n_users, n_factors)(user)
    
    movie = Input(shape=(1,))
    m = EmbeddingLayer(n_movies, n_factors)(movie)
    
    x = Concatenate()([u, m])
    x = Dropout(0.05)(x)
    
    x = Dense(10, kernel_initializer=&apos;he_normal&apos;)(x)
    x = Activation(&apos;relu&apos;)(x)
    x = Dropout(0.5)(x)
    
    x = Dense(1, kernel_initializer=&apos;he_normal&apos;)(x)
    x = Activation(&apos;sigmoid&apos;)(x)
    x = Lambda(lambda x: x * (max_rating - min_rating) + min_rating)(x)

    model = Model(inputs=[user, movie], outputs=x)
    opt = Adam(lr=0.001)
    model.compile(loss=&apos;mean_squared_error&apos;, optimizer=opt)

    return model
</code></pre>
<p>Most of the parameters are still in the embedding layers, but we have some added learning capability from the dense layers.</p>
<pre><code class="language-python">model = RecommenderNet(n_users, n_movies, n_factors, min_rating, max_rating)
model.summary()
</code></pre>
<pre>
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_5 (InputLayer)            (None, 1)            0                                            
__________________________________________________________________________________________________
input_6 (InputLayer)            (None, 1)            0                                            
__________________________________________________________________________________________________
embedding_7 (Embedding)         (None, 1, 50)        33550       input_5[0][0]                    
__________________________________________________________________________________________________
embedding_8 (Embedding)         (None, 1, 50)        453300      input_6[0][0]                    
__________________________________________________________________________________________________
reshape_7 (Reshape)             (None, 50)           0           embedding_7[0][0]                
__________________________________________________________________________________________________
reshape_8 (Reshape)             (None, 50)           0           embedding_8[0][0]                
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 100)          0           reshape_7[0][0]                  
                                                                 reshape_8[0][0]                  
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 100)          0           concatenate_1[0][0]              
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 10)           1010        dropout_1[0][0]                  
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 10)           0           dense_1[0][0]                    
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 10)           0           activation_2[0][0]               
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 1)            11          dropout_2[0][0]                  
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 1)            0           dense_2[0][0]                    
__________________________________________________________________________________________________
lambda_2 (Lambda)               (None, 1)            0           activation_3[0][0]               
==================================================================================================
Total params: 487,871
Trainable params: 487,871
Non-trainable params: 0
__________________________________________________________________________________________________
</pre>
<pre><code class="language-python">history = model.fit(x=X_train_array, y=y_train, batch_size=64, epochs=5,
                    verbose=1, validation_data=(X_test_array, y_test))
</code></pre>
<pre>
Train on 90003 samples, validate on 10001 samples
Epoch 1/5
90003/90003 [==============================] - 6s 71us/step - loss: 0.9461 - val_loss: 0.8079
Epoch 2/5
90003/90003 [==============================] - 6s 64us/step - loss: 0.8097 - val_loss: 0.7898
Epoch 3/5
90003/90003 [==============================] - 6s 63us/step - loss: 0.7781 - val_loss: 0.7855
Epoch 4/5
90003/90003 [==============================] - 6s 64us/step - loss: 0.7617 - val_loss: 0.7820
Epoch 5/5
90003/90003 [==============================] - 6s 63us/step - loss: 0.7513 - val_loss: 0.7858
</pre>
<p>Without doing any tuning at all we still managed to get a result that&apos;s pretty close to the best performance we saw with the traditional approach. This technique has the added benefit that we can easily incorporate additional features into the model. For instance, we could create some date features from the timestamp or throw in the movie genres as a new embedding layer. We could tune the size of the movie and user embeddings independently since they no longer need to match. Lots of possibilities here.</p>
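<p>To make one of those ideas concrete, here&apos;s a rough sketch (not from the original notebook) of what adding a genre embedding as a third input might look like. It assumes a hypothetical integer genre id per movie; the real data set lists multiple genres per film, so treat this as illustrative only.</p>
<pre><code class="language-python"># Hypothetical sketch: extend the concat-based model with a third input for a
# movie&apos;s primary genre. Assumes an integer genre id per movie (illustrative only).
def RecommenderNetWithGenre(n_users, n_movies, n_genres, n_factors, min_rating, max_rating):
    user = Input(shape=(1,))
    u = EmbeddingLayer(n_users, n_factors)(user)

    movie = Input(shape=(1,))
    m = EmbeddingLayer(n_movies, n_factors)(movie)

    genre = Input(shape=(1,))
    g = EmbeddingLayer(n_genres, 10)(genre)  # genre embedding can be smaller than the others

    x = Concatenate()([u, m, g])
    x = Dropout(0.05)(x)

    x = Dense(10, kernel_initializer=&apos;he_normal&apos;)(x)
    x = Activation(&apos;relu&apos;)(x)
    x = Dropout(0.5)(x)

    x = Dense(1, kernel_initializer=&apos;he_normal&apos;)(x)
    x = Activation(&apos;sigmoid&apos;)(x)
    x = Lambda(lambda x: x * (max_rating - min_rating) + min_rating)(x)

    model = Model(inputs=[user, movie, genre], outputs=x)
    model.compile(loss=&apos;mean_squared_error&apos;, optimizer=Adam(lr=0.001))
    return model
</code></pre>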
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Deep Learning With Keras: Convolutional Networks]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In my last post, I kicked off a series on deep learning by showing how to apply several core neural network concepts such as dense layers, embeddings, and regularization to build models using structured and/or time-series data.  In this post we&apos;ll see how to build models using</p>]]></description><link>https://www.johnwittenauer.net/deep-learning-with-keras-convolutional-networks/</link><guid isPermaLink="false">5c3113624d104a00bfc50ea1</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Data Science]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Tue, 08 Jan 2019 01:02:17 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In my last post, I kicked off a series on deep learning by showing how to apply several core neural network concepts such as dense layers, embeddings, and regularization to build models using structured and/or time-series data.  In this post we&apos;ll see how to build models using another core component in modern deep learning: convolutions.  Convolutional layers are primarily used in image-based models but have some interesting properties that make them useful for sequential data as well.  The biggest wrinkle that convolutional layers introduce is an element of locality.  They have a receptive field that consists of some subset of the input data.  In essence, each convolution can only &quot;see&quot; part of the image, sequence etc. that it&apos;s being trained on.</p>
<p>I&apos;m not going to cover convolutional layers in depth here; there are tons of great resources out there already to learn about them. If you&apos;re new to the concept, I would recommend <a href="https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/?ref=johnwittenauer.net">this blog series</a> as a starting point, or just do some googling for explainers. There&apos;s a lot of good content that comes up.</p>
<p>We&apos;ll start with a simple dense network and gradually improve it until we&apos;re getting pretty good results classifying images in the CIFAR 10 data set.  We&apos;ll then see how we can avoid building a network from scratch by taking a large, pre-trained net and fine-tuning it to a custom domain.  As with my first post in this series, much of this content is originally based on Jeremy Howard&apos;s <a href="http://www.fast.ai/?ref=johnwittenauer.net">fast.ai lessons</a>.  I&apos;ve combined content from a few different lessons and converted code to use Keras instead of PyTorch.</p>
<p>Since Keras comes with a pre-built data loader for CIFAR 10, we can just use that to get started instead of worrying about locating and importing the data.</p>
<pre><code class="language-python">%matplotlib inline
import matplotlib.pyplot as plt
from keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train.shape, y_train.shape, x_test.shape, y_test.shape
</code></pre>
<pre>
((50000, 32, 32, 3), (50000, 1), (10000, 32, 32, 3), (10000, 1))
</pre>
<p>Plot a few of the images to get an idea what they look like and confirm that the data loaded correctly. You&apos;ll quickly notice the CIFAR 10 images are very low resolution (32 x 32 images with 3 color channels). This makes training from scratch quite feasible even on modest compute resources.</p>
<pre><code class="language-python">def plot_image(index):
    image = x_train[index, :, :, :]
    plt.imshow(image)

plot_image(4)
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2019/01/conv1.png" alt loading="lazy"></p>
<pre><code class="language-python">plot_image(6)
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2019/01/conv2.png" alt="conv2" loading="lazy"></p>
<p>We need to do a data conversion to get the class labels into one-hot encoded format. This will allow us to use a softmax activation and categorical cross-entropy loss in our network. CIFAR 10 only has 10 distinct classes so this is fairly straightforward.</p>
<pre><code class="language-python">import keras

y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

y_train[0]
</code></pre>
<pre>
array([0., 0., 0., 0., 0., 0., 1., 0., 0., 0.], dtype=float32)
</pre>
<p>The only other pre-processing step to apply is normalizing the input data. Since everything is an RGB value, we can keep it simple and just divide by 255.</p>
<pre><code class="language-python">x_train = x_train.astype(&apos;float32&apos;)
x_test = x_test.astype(&apos;float32&apos;)
x_train /= 255
x_test /= 255
</code></pre>
<p>Define a few useful configuration items to use throughout the exercise. The input shape variable will have a value of (32, 32, 3) corresponding to the shape of the array for each image.</p>
<pre><code class="language-python">in_shape = x_train.shape[1:]
batch_size = 256
n_classes = 10
lr = 0.01
</code></pre>
<p>Now we can get started with the actual modeling part. For a first attempt, let&apos;s do the simplest and most naive model possible. We&apos;ll just create a straightforward fully-connected model and stick a softmax activation on at the end.</p>
<pre><code class="language-python">from keras.models import Model
from keras.layers import Activation, Dense, Flatten, Input
from keras.optimizers import Adam

def SimpleNet(in_shape, layers, n_classes, lr):
    i = Input(shape=in_shape)
    x = Flatten()(i)
    
    for n in range(len(layers)):
        x = Dense(layers[n])(x)
        x = Activation(&apos;relu&apos;)(x)
    
    x = Dense(n_classes)(x)
    x = Activation(&apos;softmax&apos;)(x)
    
    model = Model(inputs=i, outputs=x)
    opt = Adam(lr=lr)
    model.compile(loss=&apos;categorical_crossentropy&apos;, optimizer=opt, metrics=[&apos;accuracy&apos;])
    
    return model
</code></pre>
<p>Note that the architecture is somewhat flexible in that we can define as many dense layers as we want by just passing in a list of numbers to the &quot;layers&quot; parameter (where the numbers correspond to the size of the layer). In this case we&apos;re only going to use one layer, but this capability will be very useful later on.</p>
<pre><code class="language-python">model = SimpleNet(in_shape, [40], n_classes, lr)
model.summary()
</code></pre>
<pre>
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 32, 32, 3)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 3072)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 40)                122920    
_________________________________________________________________
activation_1 (Activation)    (None, 40)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                410       
_________________________________________________________________
activation_2 (Activation)    (None, 10)                0         
=================================================================
Total params: 123,330
Trainable params: 123,330
Non-trainable params: 0
_________________________________________________________________
</pre>
<p>Our last step before training is to define an image data generator. We could just train on the images as-is, but randomly applying transformations to the images will make the classifier more robust. Keras has a utility class built in for just this purpose, so we can use it to randomly shift the images or flip them horizontally during training.</p>
<pre><code class="language-python">from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True)
</code></pre>
<p>Let&apos;s try training for 10 epochs and see what happens!</p>
<pre><code class="language-python">model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    epochs=10, validation_data=(x_test, y_test), workers=4)
</code></pre>
<pre>
Epoch 1/10
196/196 [==============================] - 26s 134ms/step - loss: 2.4531 - acc: 0.0974 - val_loss: 2.3026 - val_acc: 0.1000
Epoch 2/10
196/196 [==============================] - 23s 120ms/step - loss: 2.2576 - acc: 0.1268 - val_loss: 2.1575 - val_acc: 0.1623
Epoch 3/10
196/196 [==============================] - 24s 120ms/step - loss: 2.1256 - acc: 0.1741 - val_loss: 2.0836 - val_acc: 0.1764
Epoch 4/10
196/196 [==============================] - 24s 122ms/step - loss: 2.1123 - acc: 0.1760 - val_loss: 2.0775 - val_acc: 0.1972
Epoch 5/10
196/196 [==============================] - 23s 119ms/step - loss: 2.0938 - acc: 0.1802 - val_loss: 2.0716 - val_acc: 0.1710
Epoch 6/10
196/196 [==============================] - 24s 120ms/step - loss: 2.0940 - acc: 0.1784 - val_loss: 2.0660 - val_acc: 0.1875
Epoch 7/10
196/196 [==============================] - 24s 121ms/step - loss: 2.0894 - acc: 0.1822 - val_loss: 2.1032 - val_acc: 0.1765
Epoch 8/10
196/196 [==============================] - 24s 121ms/step - loss: 2.0954 - acc: 0.1799 - val_loss: 2.0751 - val_acc: 0.1745
Epoch 9/10
196/196 [==============================] - 23s 120ms/step - loss: 2.0853 - acc: 0.1788 - val_loss: 2.0702 - val_acc: 0.1743
Epoch 10/10
196/196 [==============================] - 23s 120ms/step - loss: 2.0889 - acc: 0.1775 - val_loss: 2.0659 - val_acc: 0.1844
</pre>
<p>Clearly the naive approach is not very effective. The model is basically doing a bit better than randomly guessing. Let&apos;s replace the dense layer with a few convolutional layers instead. For our first attempt at using convolutions, we&apos;ll use a kernel size of 3 and a stride of 2 (rather than use pooling layers in between the conv layers) and a global max pooling layer to condense the output shape before going through the softmax.</p>
<pre><code class="language-python">from keras.layers import Conv2D, GlobalMaxPooling2D

def ConvNet(in_shape, layers, n_classes, lr):
    i = Input(shape=in_shape)
    
    for n in range(len(layers)):
        if n == 0:
            x = Conv2D(layers[n], kernel_size=3, strides=2)(i)
        else:
            x = Conv2D(layers[n], kernel_size=3, strides=2)(x)
        x = Activation(&apos;relu&apos;)(x)
    
    x = GlobalMaxPooling2D()(x)
    x = Dense(n_classes)(x)
    x = Activation(&apos;softmax&apos;)(x)
    
    model = Model(inputs=i, outputs=x)
    opt = Adam(lr=lr)
    model.compile(loss=&apos;categorical_crossentropy&apos;, optimizer=opt, metrics=[&apos;accuracy&apos;])
    
    return model
</code></pre>
<p>This time let&apos;s try using 3 conv layers with an increasing number of filters in each layer.</p>
<pre><code class="language-python">model = ConvNet(in_shape, [20, 40, 80], n_classes, lr)
model.summary()
</code></pre>
<pre>
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         (None, 32, 32, 3)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 15, 15, 20)        560       
_________________________________________________________________
activation_3 (Activation)    (None, 15, 15, 20)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 7, 7, 40)          7240      
_________________________________________________________________
activation_4 (Activation)    (None, 7, 7, 40)          0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 3, 3, 80)          28880     
_________________________________________________________________
activation_5 (Activation)    (None, 3, 3, 80)          0         
_________________________________________________________________
global_max_pooling2d_1 (Glob (None, 80)                0         
_________________________________________________________________
dense_3 (Dense)              (None, 10)                810       
_________________________________________________________________
activation_6 (Activation)    (None, 10)                0         
=================================================================
Total params: 37,490
Trainable params: 37,490
Non-trainable params: 0
_________________________________________________________________
</pre>
<p>It&apos;s worth checking your intuition and understanding of what&apos;s going on by looking at the summary output and verifying that the numbers make sense. For instance, why does the first convolutional layer have 560 parameters? Where does that come from? Well, we have a kernel size of 3 which creates a 3 x 3 filter (i.e. 9 parameters), but we also have three color channels for a depth of 3, so each filter is really 3 x 3 x 3 = 27 parameters, plus 1 for the bias, or 28 per filter. We specified 20 filters in the first layer, so 28 x 20 = 560. Try applying similar logic to the second conv layer and see if the result makes sense.</p>
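<p>To make the arithmetic easy to check, here&apos;s a tiny helper (not part of the original notebook) that reproduces the parameter counts from the summary above.</p>
<pre><code class="language-python"># Each filter spans kernel_size x kernel_size x input_depth weights, plus one bias.
def conv_params(kernel_size, input_depth, n_filters):
    return (kernel_size * kernel_size * input_depth + 1) * n_filters

print(conv_params(3, 3, 20))   # first conv layer:  (27 + 1) * 20 = 560
print(conv_params(3, 20, 40))  # second conv layer: (180 + 1) * 40 = 7240
print(conv_params(3, 40, 80))  # third conv layer:  (360 + 1) * 80 = 28880
</code></pre>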
<p>Now that we&apos;ve got a model, let&apos;s try training it using the exact same approach as before.</p>
<pre><code class="language-python">model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    epochs=10, validation_data=(x_test, y_test), workers=4)
</code></pre>
<pre>
Epoch 1/10
196/196 [==============================] - 25s 127ms/step - loss: 1.8725 - acc: 0.3019 - val_loss: 1.7737 - val_acc: 0.3772
Epoch 2/10
196/196 [==============================] - 24s 120ms/step - loss: 1.6342 - acc: 0.4015 - val_loss: 1.5930 - val_acc: 0.4314
Epoch 3/10
196/196 [==============================] - 24s 120ms/step - loss: 1.5503 - acc: 0.4349 - val_loss: 1.5013 - val_acc: 0.4567
Epoch 4/10
196/196 [==============================] - 24s 122ms/step - loss: 1.4848 - acc: 0.4623 - val_loss: 1.4356 - val_acc: 0.4801
Epoch 5/10
196/196 [==============================] - 24s 122ms/step - loss: 1.4493 - acc: 0.4798 - val_loss: 1.3845 - val_acc: 0.4972
Epoch 6/10
196/196 [==============================] - 23s 119ms/step - loss: 1.4186 - acc: 0.4892 - val_loss: 1.3761 - val_acc: 0.5066
Epoch 7/10
196/196 [==============================] - 24s 121ms/step - loss: 1.3999 - acc: 0.4956 - val_loss: 1.3681 - val_acc: 0.5024
Epoch 8/10
196/196 [==============================] - 24s 121ms/step - loss: 1.3837 - acc: 0.5047 - val_loss: 1.4632 - val_acc: 0.4810
Epoch 9/10
196/196 [==============================] - 23s 120ms/step - loss: 1.3838 - acc: 0.5006 - val_loss: 1.3647 - val_acc: 0.5139
Epoch 10/10
196/196 [==============================] - 24s 120ms/step - loss: 1.3565 - acc: 0.5114 - val_loss: 1.3553 - val_acc: 0.5162
</pre>
<p>The results are a lot different this time! The model is clearly learning, and after 10 epochs we&apos;re at about 50% accuracy on the validation set. Still, we should be able to do a lot better. For the next attempt let&apos;s introduce a few new wrinkles. First, we&apos;re going to add batch normalization after each conv layer. Second, we&apos;re going to add a single conv layer at the beginning with a larger kernel size and a stride of 1 so we don&apos;t immediately shrink the spatial resolution. Third, we&apos;re going to introduce padding, which will modify the shape of each conv layer&apos;s output. Finally, we&apos;re going to add a few more layers to make the model bigger.</p>
<p>To make the model definition more modular, I&apos;ve pulled out the conv layer into a separate class. There are multiple ways to do this (a function would have worked just as well) but I opted to mimic the way Keras&apos;s functional API works.</p>
<pre><code class="language-python">from keras.layers import BatchNormalization

class ConvLayer:
    def __init__(self, filters, kernel_size, stride):
        self.filters = filters
        self.kernel_size = kernel_size
        self.stride = stride

    def __call__(self, x):
        x = Conv2D(self.filters, kernel_size=self.kernel_size,
                   strides=self.stride, padding=&apos;same&apos;, use_bias=False)(x)
        x = Activation(&apos;relu&apos;)(x)
        x = BatchNormalization()(x)
        return x

def ConvNet2(in_shape, layers, n_classes, lr):
    i = Input(shape=in_shape)
    
    x = Conv2D(layers[0], kernel_size=5, strides=1, padding=&apos;same&apos;)(i)
    x = Activation(&apos;relu&apos;)(x)
    
    for n in range(1, len(layers)):
        x = ConvLayer(layers[n], kernel_size=3, stride=2)(x)

    x = GlobalMaxPooling2D()(x)
    x = Dense(n_classes)(x)
    x = Activation(&apos;softmax&apos;)(x)
    
    model = Model(inputs=i, outputs=x)
    opt = Adam(lr=lr)
    model.compile(loss=&apos;categorical_crossentropy&apos;, optimizer=opt, metrics=[&apos;accuracy&apos;])
    
    return model
</code></pre>
<pre><code class="language-python">model = ConvNet2(in_shape, [10, 20, 40, 80, 160], n_classes, lr)
model.summary()
</code></pre>
<pre>
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         (None, 32, 32, 3)         0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 32, 32, 10)        760       
_________________________________________________________________
activation_7 (Activation)    (None, 32, 32, 10)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 16, 16, 20)        1800      
_________________________________________________________________
activation_8 (Activation)    (None, 16, 16, 20)        0         
_________________________________________________________________
batch_normalization_1 (Batch (None, 16, 16, 20)        80        
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 8, 8, 40)          7200      
_________________________________________________________________
activation_9 (Activation)    (None, 8, 8, 40)          0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 8, 8, 40)          160       
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 4, 4, 80)          28800     
_________________________________________________________________
activation_10 (Activation)   (None, 4, 4, 80)          0         
_________________________________________________________________
batch_normalization_3 (Batch (None, 4, 4, 80)          320       
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 2, 2, 160)         115200    
_________________________________________________________________
activation_11 (Activation)   (None, 2, 2, 160)         0         
_________________________________________________________________
batch_normalization_4 (Batch (None, 2, 2, 160)         640       
_________________________________________________________________
global_max_pooling2d_2 (Glob (None, 160)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 10)                1610      
_________________________________________________________________
activation_12 (Activation)   (None, 10)                0         
=================================================================
Total params: 156,570
Trainable params: 155,970
Non-trainable params: 600
_________________________________________________________________
</pre>
<p>We made a bunch of improvements and the network has a much larger capacity, so let&apos;s see what it does.</p>
<pre><code class="language-python">model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    epochs=10, validation_data=(x_test, y_test), workers=4)
</code></pre>
<pre>
Epoch 1/10
196/196 [==============================] - 24s 125ms/step - loss: 1.6451 - acc: 0.4258 - val_loss: 1.5408 - val_acc: 0.4597
Epoch 2/10
196/196 [==============================] - 23s 118ms/step - loss: 1.3130 - acc: 0.5280 - val_loss: 1.7158 - val_acc: 0.4559
Epoch 3/10
196/196 [==============================] - 24s 121ms/step - loss: 1.1669 - acc: 0.5803 - val_loss: 1.5101 - val_acc: 0.5311
Epoch 4/10
196/196 [==============================] - 23s 119ms/step - loss: 1.0642 - acc: 0.6205 - val_loss: 1.3304 - val_acc: 0.5538
Epoch 5/10
196/196 [==============================] - 23s 118ms/step - loss: 0.9887 - acc: 0.6485 - val_loss: 1.2749 - val_acc: 0.5955
Epoch 6/10
196/196 [==============================] - 23s 119ms/step - loss: 0.9264 - acc: 0.6717 - val_loss: 1.3210 - val_acc: 0.5819
Epoch 7/10
196/196 [==============================] - 23s 120ms/step - loss: 0.8812 - acc: 0.6887 - val_loss: 0.9221 - val_acc: 0.6807
Epoch 8/10
196/196 [==============================] - 23s 120ms/step - loss: 0.8437 - acc: 0.6985 - val_loss: 0.8809 - val_acc: 0.7012
Epoch 9/10
196/196 [==============================] - 24s 120ms/step - loss: 0.8196 - acc: 0.7083 - val_loss: 0.9064 - val_acc: 0.6873
Epoch 10/10
196/196 [==============================] - 24s 120ms/step - loss: 0.7897 - acc: 0.7194 - val_loss: 0.8259 - val_acc: 0.7179
</pre>
<p>That&apos;s a significant improvement! Our validation accuracy after 10 epochs jumped all the way from ~50% to ~70%. We&apos;re already doing pretty good, but there&apos;s one more major addition we can make that should bump performance even higher. A key addition to modern convolutional networks was the invention of <a href="https://towardsdatascience.com/an-overview-of-resnet-and-its-variants-5281e2f56035?ref=johnwittenauer.net">residual layers</a>, which introduce an &quot;identity&quot; connection to the output of a block of convolutions. Below I&apos;ve added a new &quot;ResLayer&quot; class that inherits from &quot;ConvLayer&quot; but outputs the addition of the original input with the output from the conv layer. Building on the previous network, we&apos;ve now added two residual layers to each &quot;block&quot; in the model definition. These residual layers have a stride of 1 so they don&apos;t change the shape of the output. Finally, we&apos;ve added a bit of regularization to keep the model from overfitting too badly.</p>
<pre><code class="language-python">from keras import layers
from keras import regularizers
from keras.layers import Dropout

class ConvLayer:
    def __init__(self, filters, kernel_size, stride):
        self.filters = filters
        self.kernel_size = kernel_size
        self.stride = stride

    def __call__(self, x):
        x = Conv2D(self.filters, kernel_size=self.kernel_size,
                   strides=self.stride, padding=&apos;same&apos;, use_bias=False,
                   kernel_regularizer=regularizers.l2(1e-6))(x)
        x = Activation(&apos;relu&apos;)(x)
        x = BatchNormalization()(x)
        return x

class ResLayer(ConvLayer):
    def __call__(self, x):
        return layers.add([x, super().__call__(x)])

def ResNet(in_shape, layers, n_classes, lr):
    i = Input(shape=in_shape)
    
    x = Conv2D(layers[0], kernel_size=7, strides=1, padding=&apos;same&apos;)(i)
    x = Activation(&apos;relu&apos;)(x)

    for n in range(1, len(layers)):
        x = ConvLayer(layers[n], kernel_size=3, stride=2)(x)
        x = ResLayer(layers[n], kernel_size=3, stride=1)(x)
        x = ResLayer(layers[n], kernel_size=3, stride=1)(x)

    x = GlobalMaxPooling2D()(x)
    x = Dropout(0.1)(x)
    x = Dense(n_classes)(x)
    x = Activation(&apos;softmax&apos;)(x)
    
    model = Model(inputs=i, outputs=x)
    opt = Adam(lr=lr)
    model.compile(loss=&apos;categorical_crossentropy&apos;, optimizer=opt, metrics=[&apos;accuracy&apos;])
    
    return model
</code></pre>
<pre><code class="language-python">model = ResNet(in_shape, [10, 20, 40, 80, 160], n_classes, lr)
model.summary()
</code></pre>
<pre>
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_4 (InputLayer)            (None, 32, 32, 3)    0                                            
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 32, 32, 10)   1480        input_4[0][0]                    
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 32, 32, 10)   0           conv2d_9[0][0]                   
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 16, 16, 20)   1800        activation_13[0][0]              
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 16, 16, 20)   0           conv2d_10[0][0]                  
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 16, 16, 20)   80          activation_14[0][0]              
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 16, 16, 20)   3600        batch_normalization_5[0][0]      
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 16, 16, 20)   0           conv2d_11[0][0]                  
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 16, 16, 20)   80          activation_15[0][0]              
__________________________________________________________________________________________________
add_1 (Add)                     (None, 16, 16, 20)   0           batch_normalization_5[0][0]      
                                                                 batch_normalization_6[0][0]      
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 16, 16, 20)   3600        add_1[0][0]                      
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 16, 16, 20)   0           conv2d_12[0][0]                  
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 16, 16, 20)   80          activation_16[0][0]              
__________________________________________________________________________________________________
add_2 (Add)                     (None, 16, 16, 20)   0           add_1[0][0]                      
                                                                 batch_normalization_7[0][0]      
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 8, 8, 40)     7200        add_2[0][0]                      
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 8, 8, 40)     0           conv2d_13[0][0]                  
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 8, 8, 40)     160         activation_17[0][0]              
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 8, 8, 40)     14400       batch_normalization_8[0][0]      
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 8, 8, 40)     0           conv2d_14[0][0]                  
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 8, 8, 40)     160         activation_18[0][0]              
__________________________________________________________________________________________________
add_3 (Add)                     (None, 8, 8, 40)     0           batch_normalization_8[0][0]      
                                                                 batch_normalization_9[0][0]      
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 8, 8, 40)     14400       add_3[0][0]                      
__________________________________________________________________________________________________
activation_19 (Activation)      (None, 8, 8, 40)     0           conv2d_15[0][0]                  
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 8, 8, 40)     160         activation_19[0][0]              
__________________________________________________________________________________________________
add_4 (Add)                     (None, 8, 8, 40)     0           add_3[0][0]                      
                                                                 batch_normalization_10[0][0]     
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 4, 4, 80)     28800       add_4[0][0]                      
__________________________________________________________________________________________________
activation_20 (Activation)      (None, 4, 4, 80)     0           conv2d_16[0][0]                  
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 4, 4, 80)     320         activation_20[0][0]              
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 4, 4, 80)     57600       batch_normalization_11[0][0]     
__________________________________________________________________________________________________
activation_21 (Activation)      (None, 4, 4, 80)     0           conv2d_17[0][0]                  
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 4, 4, 80)     320         activation_21[0][0]              
__________________________________________________________________________________________________
add_5 (Add)                     (None, 4, 4, 80)     0           batch_normalization_11[0][0]     
                                                                 batch_normalization_12[0][0]     
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 4, 4, 80)     57600       add_5[0][0]                      
__________________________________________________________________________________________________
activation_22 (Activation)      (None, 4, 4, 80)     0           conv2d_18[0][0]                  
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 4, 4, 80)     320         activation_22[0][0]              
__________________________________________________________________________________________________
add_6 (Add)                     (None, 4, 4, 80)     0           add_5[0][0]                      
                                                                 batch_normalization_13[0][0]     
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 2, 2, 160)    115200      add_6[0][0]                      
__________________________________________________________________________________________________
activation_23 (Activation)      (None, 2, 2, 160)    0           conv2d_19[0][0]                  
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 2, 2, 160)    640         activation_23[0][0]              
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, 2, 2, 160)    230400      batch_normalization_14[0][0]     
__________________________________________________________________________________________________
activation_24 (Activation)      (None, 2, 2, 160)    0           conv2d_20[0][0]                  
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 2, 2, 160)    640         activation_24[0][0]              
__________________________________________________________________________________________________
add_7 (Add)                     (None, 2, 2, 160)    0           batch_normalization_14[0][0]     
                                                                 batch_normalization_15[0][0]     
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (None, 2, 2, 160)    230400      add_7[0][0]                      
__________________________________________________________________________________________________
activation_25 (Activation)      (None, 2, 2, 160)    0           conv2d_21[0][0]                  
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 2, 2, 160)    640         activation_25[0][0]              
__________________________________________________________________________________________________
add_8 (Add)                     (None, 2, 2, 160)    0           add_7[0][0]                      
                                                                 batch_normalization_16[0][0]     
__________________________________________________________________________________________________
global_max_pooling2d_3 (GlobalM (None, 160)          0           add_8[0][0]                      
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 160)          0           global_max_pooling2d_3[0][0]     
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, 10)           1610        dropout_1[0][0]                  
__________________________________________________________________________________________________
activation_26 (Activation)      (None, 10)           0           dense_5[0][0]                    
==================================================================================================
Total params: 771,690
Trainable params: 769,890
Non-trainable params: 1,800
__________________________________________________________________________________________________
</pre>
<p>The model summary is now getting quite large, but you can still follow through each layer and make sense of what&apos;s happening. Let&apos;s run this one last time and see what the results look like. We&apos;ll increase the epoch count since deeper networks tend to take longer to train.</p>
<pre><code class="language-python">model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    epochs=40, validation_data=(x_test, y_test), workers=4)
</code></pre>
<pre>
Epoch 1/40
196/196 [==============================] - 28s 145ms/step - loss: 1.9806 - acc: 0.3498 - val_loss: 7.4266 - val_acc: 0.0771
Epoch 2/40
196/196 [==============================] - 23s 118ms/step - loss: 1.5761 - acc: 0.4484 - val_loss: 2.0037 - val_acc: 0.3478
Epoch 3/40
196/196 [==============================] - 24s 124ms/step - loss: 1.5488 - acc: 0.4612 - val_loss: 14.3443 - val_acc: 0.1005
Epoch 4/40
196/196 [==============================] - 24s 122ms/step - loss: 1.6194 - acc: 0.4359 - val_loss: 2.5182 - val_acc: 0.2401
Epoch 5/40
196/196 [==============================] - 24s 121ms/step - loss: 1.5562 - acc: 0.4626 - val_loss: 2.0495 - val_acc: 0.3302
Epoch 6/40
196/196 [==============================] - 24s 121ms/step - loss: 1.6183 - acc: 0.4400 - val_loss: 2.9989 - val_acc: 0.1782
Epoch 7/40
196/196 [==============================] - 24s 121ms/step - loss: 1.4886 - acc: 0.4672 - val_loss: 1.3995 - val_acc: 0.4944
Epoch 8/40
196/196 [==============================] - 24s 121ms/step - loss: 1.3551 - acc: 0.5162 - val_loss: 1.3086 - val_acc: 0.5268
Epoch 9/40
196/196 [==============================] - 24s 123ms/step - loss: 1.2971 - acc: 0.5373 - val_loss: 1.2979 - val_acc: 0.5423
Epoch 10/40
196/196 [==============================] - 24s 121ms/step - loss: 1.2737 - acc: 0.5507 - val_loss: 8.2801 - val_acc: 0.1325
Epoch 11/40
196/196 [==============================] - 24s 123ms/step - loss: 1.3697 - acc: 0.5350 - val_loss: 1.2361 - val_acc: 0.5742
Epoch 12/40
196/196 [==============================] - 24s 121ms/step - loss: 1.2410 - acc: 0.5652 - val_loss: 1.1365 - val_acc: 0.6007
Epoch 13/40
196/196 [==============================] - 24s 121ms/step - loss: 1.1514 - acc: 0.5958 - val_loss: 1.1343 - val_acc: 0.6118
Epoch 14/40
196/196 [==============================] - 24s 122ms/step - loss: 1.1079 - acc: 0.6096 - val_loss: 1.1276 - val_acc: 0.6092
Epoch 15/40
196/196 [==============================] - 24s 121ms/step - loss: 1.0586 - acc: 0.6306 - val_loss: 1.0696 - val_acc: 0.6330
Epoch 16/40
196/196 [==============================] - 23s 119ms/step - loss: 1.0240 - acc: 0.6437 - val_loss: 1.0270 - val_acc: 0.6596
Epoch 17/40
196/196 [==============================] - 24s 122ms/step - loss: 0.9809 - acc: 0.6611 - val_loss: 1.0828 - val_acc: 0.6391
Epoch 18/40
196/196 [==============================] - 24s 121ms/step - loss: 0.9591 - acc: 0.6685 - val_loss: 0.9332 - val_acc: 0.6848
Epoch 19/40
196/196 [==============================] - 24s 122ms/step - loss: 0.9166 - acc: 0.6860 - val_loss: 0.9894 - val_acc: 0.6632
Epoch 20/40
196/196 [==============================] - 24s 121ms/step - loss: 0.8854 - acc: 0.6983 - val_loss: 1.1848 - val_acc: 0.6169
Epoch 21/40
196/196 [==============================] - 24s 122ms/step - loss: 0.8659 - acc: 0.7045 - val_loss: 0.9105 - val_acc: 0.6978
Epoch 22/40
196/196 [==============================] - 24s 122ms/step - loss: 0.8366 - acc: 0.7162 - val_loss: 0.8779 - val_acc: 0.7132
Epoch 23/40
196/196 [==============================] - 23s 120ms/step - loss: 0.8175 - acc: 0.7252 - val_loss: 1.8874 - val_acc: 0.5708
Epoch 24/40
196/196 [==============================] - 24s 120ms/step - loss: 0.8383 - acc: 0.7203 - val_loss: 0.9611 - val_acc: 0.6878
Epoch 25/40
196/196 [==============================] - 24s 121ms/step - loss: 0.7910 - acc: 0.7360 - val_loss: 0.8956 - val_acc: 0.7037
Epoch 26/40
196/196 [==============================] - 24s 121ms/step - loss: 0.7728 - acc: 0.7445 - val_loss: 0.8712 - val_acc: 0.7297
Epoch 27/40
196/196 [==============================] - 24s 121ms/step - loss: 0.7532 - acc: 0.7514 - val_loss: 0.8697 - val_acc: 0.7191
Epoch 28/40
196/196 [==============================] - 24s 121ms/step - loss: 0.7419 - acc: 0.7568 - val_loss: 0.7995 - val_acc: 0.7405
Epoch 29/40
196/196 [==============================] - 24s 122ms/step - loss: 0.7385 - acc: 0.7599 - val_loss: 0.8080 - val_acc: 0.7451
Epoch 30/40
196/196 [==============================] - 24s 121ms/step - loss: 0.7202 - acc: 0.7663 - val_loss: 0.9121 - val_acc: 0.7253
Epoch 31/40
196/196 [==============================] - 24s 121ms/step - loss: 0.7078 - acc: 0.7737 - val_loss: 0.8999 - val_acc: 0.7223
Epoch 32/40
196/196 [==============================] - 24s 120ms/step - loss: 0.6969 - acc: 0.7756 - val_loss: 0.9682 - val_acc: 0.7135
Epoch 33/40
196/196 [==============================] - 24s 121ms/step - loss: 0.6851 - acc: 0.7825 - val_loss: 0.8145 - val_acc: 0.7456
Epoch 34/40
196/196 [==============================] - 23s 119ms/step - loss: 0.6800 - acc: 0.7859 - val_loss: 0.7972 - val_acc: 0.7585
Epoch 35/40
196/196 [==============================] - 23s 118ms/step - loss: 0.6689 - acc: 0.7919 - val_loss: 0.7807 - val_acc: 0.7654
Epoch 36/40
196/196 [==============================] - 24s 122ms/step - loss: 0.6626 - acc: 0.7949 - val_loss: 0.8022 - val_acc: 0.7509
Epoch 37/40
196/196 [==============================] - 23s 119ms/step - loss: 0.6550 - acc: 0.7987 - val_loss: 0.8129 - val_acc: 0.7613
Epoch 38/40
196/196 [==============================] - 24s 122ms/step - loss: 0.6532 - acc: 0.8006 - val_loss: 0.8861 - val_acc: 0.7359
Epoch 39/40
196/196 [==============================] - 23s 119ms/step - loss: 0.6419 - acc: 0.8043 - val_loss: 0.8233 - val_acc: 0.7568
Epoch 40/40
196/196 [==============================] - 24s 124ms/step - loss: 0.6308 - acc: 0.8109 - val_loss: 0.7809 - val_acc: 0.7670
</pre>
<p>The results look pretty good. We&apos;re starting to hit the point where accuracy improvements are getting harder to come by. It&apos;s definitely possible to keep improving the model with the right tuning and augmentation strategies; however, diminishing returns start to kick in relative to the effort involved. Also, as the network keeps getting bigger (and as we graduate to larger and more complex data sets) it becomes much, much harder to build a network from scratch.</p>
<p>Fortunately there&apos;s an alternative solution via <a href="https://machinelearningmastery.com/transfer-learning-for-deep-learning/?ref=johnwittenauer.net">transfer learning</a>, which takes a model trained on one task and adapts it to another task. Combined with pre-training, which is the practice of using a model that&apos;s already been trained for a given task, we can take very large networks developed by the likes of Google and Facebook and then fine-tune them to work in a custom domain of our choosing. Below I&apos;ll walk through an example of how this works by using a pre-trained ImageNet model and adapting it to Kaggle&apos;s <a href="https://www.kaggle.com/c/dogs-vs-cats?ref=johnwittenauer.net">dogs vs cats</a> data set.</p>
<p>First get some imports out of the way. We&apos;ll need all of this stuff throughout the exercise.</p>
<pre><code class="language-python">import numpy as np
from keras.applications import ResNet50
from keras.applications.resnet50 import preprocess_input
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import RMSprop
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
</code></pre>
<p>The easiest way to get the data set is via fast.ai&apos;s servers, where they&apos;ve graciously hosted a <a href="http://files.fast.ai/data/dogscats.zip?ref=johnwittenauer.net">single zip file</a> with everything we need. Extract this to a directory somewhere on your machine and update the &quot;PATH&quot; variable below, and you should be good to go. We can also specify a few useful constants such as the image dimension and batch size.</p>
<pre><code class="language-python">PATH = &apos;/home/paperspace/data/dogscats/&apos;
train_dir = f&apos;{PATH}train&apos;
valid_dir = f&apos;{PATH}valid&apos;
size = 224
batch_size = 64
</code></pre>
<p>Next we need a generator to apply transformations to the images. As before, we can use Keras&apos;s built-in generator. The only wrinkle is using a specialized preprocessing function designed for ImageNet-like source data (this also comes with Keras and was imported above).</p>
<pre><code class="language-python">train_datagen = ImageDataGenerator(
    shear_range=0.2,
    zoom_range=0.2,
    preprocessing_function=preprocess_input,
    horizontal_flip=True)

val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
</code></pre>
<p>With CIFAR 10 we had the whole data set loaded into memory, but that strategy usually isn&apos;t feasible for larger image databases. In this case we have a bunch of image files in folders on disk as our starting point, and to run a model over these images we want to be able to stream images into memory in batches rather than load everything at once. Fortunately Keras can also handle this scenario natively using the &quot;flow_from_directory&quot; function. We just need to specify the directory, image size, and batch size.</p>
<pre><code class="language-python">train_generator = train_datagen.flow_from_directory(train_dir,
    target_size=(size, size),
    batch_size=batch_size, class_mode=&apos;binary&apos;)

val_generator = val_datagen.flow_from_directory(valid_dir,
    shuffle=False,
    target_size=(size, size),
    batch_size=batch_size, class_mode=&apos;binary&apos;)
</code></pre>
<pre>
Found 23000 images belonging to 2 classes.
Found 2000 images belonging to 2 classes.
</pre>
<p>For the model, we&apos;ll use the ResNet-50 architecture with pre-trained weights. ResNet-50 is a 50-layer deep residual network (the Keras model object contains far more layer objects than that, since batch norm layers, activations, and so on each count separately) that achieves roughly 92% top-5 accuracy on ImageNet classification. Keras provides both the model architecture and an option to use existing weights out of the box. The other notable parameter in the model initializer is &quot;include_top&quot;, which indicates if we want to include the fully-connected layer at the top of the network. In our case the answer is no, because we want to &quot;hook into&quot; the model after the last residual block and add our own architecture on top.</p>
<pre><code class="language-python">base_model = ResNet50(weights=&apos;imagenet&apos;, include_top=False)
x = base_model.output
</code></pre>
<p>After instantiating the pre-trained ResNet-50 model, we can start adding new layers to the architecture. Let&apos;s start with a global average pooling layer to collapse the spatial dimensions into a flat feature vector, then add a fully-connected layer of our own. Finally, we&apos;ll use a single sigmoid unit for the class probability since the task is binary (cat or dog).</p>
<pre><code class="language-python">x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation=&apos;relu&apos;)(x)
preds = Dense(1, activation=&apos;sigmoid&apos;)(x)
</code></pre>
<p>Before finishing the model definition and compiling, there&apos;s one more notable step. We need to prevent the &quot;base&quot; layers of the model from participating in the weight update phase of training while we &quot;break in&quot; the new layers we just added. Since each layer in a Keras model has a &quot;trainable&quot; property, we can just set it to false for all layers in the base architecture.</p>
<p>(Aside: There is apparently some funkiness to using this approach in models that have batch norm layers, which can lead to sub-optimal results, especially when doing fine-tuning, which we&apos;ll get to in a few steps. I haven&apos;t seen a conclusive answer on how to deal with this, and the naive approach seems to work okay for this problem, so I&apos;m not doing anything special about it here, but I wanted to point it out as a potential issue one might run into. There&apos;s a lengthy discussion on the subject <a href="https://github.com/keras-team/keras/pull/9965?ref=johnwittenauer.net">here</a>.)</p>
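<p>We&apos;ll stick with the simple approach below, but for reference here&apos;s a rough sketch of one mitigation that&apos;s often suggested for the batch norm issue. Note that it&apos;s written against tf.keras rather than the standalone Keras used in the rest of this post, so treat it as an assumption rather than a drop-in replacement. The idea is to call the frozen backbone with &quot;training=False&quot; so its batch norm layers stay in inference mode.</p>
<pre><code class="language-python"># Sketch only: tf.keras variant of the frozen-backbone setup (an assumption,
# not the code used elsewhere in this post).
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights=&apos;imagenet&apos;, include_top=False)
base.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
# training=False keeps the batch norm layers in inference mode so their
# moving statistics aren&apos;t disturbed while the new head &quot;breaks in&quot;
x = base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation=&apos;sigmoid&apos;)(x)
alt_model = tf.keras.Model(inputs, outputs)
</code></pre>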
<pre><code class="language-python">model = Model(inputs=base_model.input, outputs=preds)
for layer in base_model.layers: layer.trainable = False
model.compile(optimizer=RMSprop(lr=0.001), loss=&apos;binary_crossentropy&apos;, metrics=[&apos;accuracy&apos;])
</code></pre>
<p>Training should be pretty familiar by now; the only wrinkle here is that we need to specify the number of batches per epoch when using the &quot;flow_from_directory&quot; generator.</p>
<pre><code class="language-python">history = model.fit_generator(train_generator,
    train_generator.n // batch_size, epochs=3, workers=4,
    validation_data=val_generator,
    validation_steps=val_generator.n // batch_size)
</code></pre>
<pre>
Epoch 1/3
359/359 [==============================] - 128s 357ms/step - loss: 0.1738 - acc: 0.9506 - val_loss: 0.0694 - val_acc: 0.9839
Epoch 2/3
359/359 [==============================] - 123s 342ms/step - loss: 0.0809 - acc: 0.9729 - val_loss: 0.1059 - val_acc: 0.9778
Epoch 3/3
359/359 [==============================] - 123s 344ms/step - loss: 0.0717 - acc: 0.9755 - val_loss: 0.1411 - val_acc: 0.9723
</pre>
<p>These results aren&apos;t too bad even with the entire base architecture held constant. This is partly due to the fact that the training images are quite similar to the images that the architecture was trained on. If we were fitting the model on something totally different, say medical image classification for instance, transfer learning would still work but it wouldn&apos;t be this easy.</p>
<p>The next step is to fine-tune some of the base model by &quot;unfreezing&quot; parts of it and allowing them to update weights during training. I&apos;m not aware of any established best practices for fine-tuning; I think it&apos;s generally a lot of trial and error. For this attempt, I unfroze the last residual block in the network and lowered the learning rate by an order of magnitude.</p>
<pre><code class="language-python">for layer in model.layers[:142]: layer.trainable = False
for layer in model.layers[142:]: layer.trainable = True
model.compile(optimizer=RMSprop(lr=0.0001), loss=&apos;binary_crossentropy&apos;, metrics=[&apos;accuracy&apos;])

history = model.fit_generator(train_generator,
    train_generator.n // batch_size, epochs=3, workers=4,
    validation_data=val_generator,
    validation_steps=val_generator.n // batch_size)
</code></pre>
<pre>
Epoch 1/3
359/359 [==============================] - 151s 421ms/step - loss: 0.0468 - acc: 0.9826 - val_loss: 1.0175 - val_acc: 0.9098
Epoch 2/3
359/359 [==============================] - 146s 406ms/step - loss: 0.0293 - acc: 0.9903 - val_loss: 0.1305 - val_acc: 0.9829
Epoch 3/3
359/359 [==============================] - 146s 406ms/step - loss: 0.0211 - acc: 0.9938 - val_loss: 0.1197 - val_acc: 0.9849
</pre>
<p>This technique is very powerful and is almost always a better idea than starting from scratch if there&apos;s a model out there that was trained on something at least somewhat similar to the thing you&apos;re trying to accomplish.  Currently transfer learning is mostly being applied to image models, although it&apos;s <a href="https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html?ref=johnwittenauer.net">quickly taking over language models</a> as well.</p>
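<p>As a final sanity check before wrapping up, here&apos;s roughly how you could score a single new image with the fine-tuned model. The file name below is hypothetical, so point it at any image from the data set you have on disk.</p>
<pre><code class="language-python"># Hypothetical file path; any cat or dog image from the data set will do
img = image.load_img(f&apos;{PATH}valid/dogs/dog.1.jpg&apos;, target_size=(size, size))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
# flow_from_directory assigns class indices alphabetically, so 0 should be cats and 1 dogs
print(&apos;P(dog) = {0:.3f}&apos;.format(float(model.predict(x)[0][0])))
</code></pre>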
<p>That wraps up this post on convolutional networks.  In the next post in this series we&apos;ll see how to use a deep learning framework like Keras to build a recommendation system.  Don&apos;t miss it!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Deep Learning With Keras: Structured Time Series]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This post marks the beginning of what I hope to become a series covering practical, real-world implementations using deep learning.  What sparked my motivation to do a series like this was Jeremy Howard&apos;s awesome <a href="http://www.fast.ai/?ref=johnwittenauer.net">fast.ai courses</a>, which show how to use deep learning to achieve world class</p>]]></description><link>https://www.johnwittenauer.net/deep-learning-with-keras-structured-time-series/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad674f</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Data Science]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Sun, 14 Oct 2018 18:00:41 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This post marks the beginning of what I hope to become a series covering practical, real-world implementations using deep learning.  What sparked my motivation to do a series like this was Jeremy Howard&apos;s awesome <a href="http://www.fast.ai/?ref=johnwittenauer.net">fast.ai courses</a>, which show how to use deep learning to achieve world class performance from scratch in a number of different domains.  The one quibble I had with the class content was its use of a custom wrapper library, which I felt masked a lot of the difficulty early on by hiding the complexity in a function (or several layers of functions).  I understand that this approach was used deliberately as a teaching mechanism, and that&apos;s fine, but I wanted to see what these exercises looked like without having that crutch.  I also have a bit more experience with Keras than with PyTorch, and while both are great libraries, my preference at the moment is still Keras for most tasks.</p>
<p>My plan for each installment of the series is to take a topic from Jeremy&apos;s class and see if I can achieve similar results using nothing but Keras and other common libraries.  We won&apos;t get into the theory or math of neural networks much, there are lots of great (free) resources on the internet now to cover that aspect of it if that&apos;s what you&apos;re looking for.  Instead, this series will focus heavily on writing code.  I won&apos;t gloss over important pre-processing steps or assume tricky details are taken care of behind the scenes.  A lot of the challenge to using neural networks in practice resides in many of these often-neglected details, after all.  By the end of each post, we should have a sequence of steps that you can reliably run yourself (assuming dependencies are met and API versions are the same) to produce a result that&apos;s similar to what I show here.  With that, let&apos;s dive in!</p>
<p>Today we&apos;ll walk through an implementation of a deep learning model for structured time series data. We&#x2019;ll use the data from Kaggle&#x2019;s <a href="https://www.kaggle.com/c/rossmann-store-sales?ref=johnwittenauer.net">Rossmann Store Sales</a> competition. The steps outlined below are inspired by (and partially based on) <a href="https://github.com/fastai/fastai/blob/master/courses/dl1/lesson3-rossman.ipynb?ref=johnwittenauer.net">lesson 3</a> from Jeremy&apos;s course.</p>
<p>The focus here is on implementing a deep learning model for structured data. I&#x2019;ve skipped a bunch of pre-processing steps that are specific to this particular data but don&#x2019;t reflect general principles about applying deep learning to tabular data. If you&#x2019;re interested, you&#x2019;ll find complete step-by-step instructions on creating the &#x201C;joined&#x201D; data in the notebook I linked to above.  I know I just got done saying that I won&apos;t skip pre-processing steps, but I&apos;m mainly talking about stuff we apply to the data to work with deep learning, not sourcing the data to begin with.</p>
<p>(As an aside, I used <a href="https://www.paperspace.com/?ref=johnwittenauer.net">Paperspace </a>to run everything in this post. If you&#x2019;re not familiar with it, Paperspace is a cloud service that lets you rent GPU instances much cheaper than AWS. It&#x2019;s a great way to get started if you don&#x2019;t have your own hardware.)</p>
<p>First we need to get a few imports out of the way. All of these should come standard with an Anaconda install. I&#x2019;m also specifying the path where I&#x2019;ve pre-saved the &#x201C;joined&#x201D; data that we&#x2019;ll use as a starting point.  If you&apos;re starting from scratch, this comes from running the first half of Jeremy&apos;s lesson 3 notebook.</p>
<pre><code class="language-python">%matplotlib inline
import datetime
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import LabelEncoder, StandardScaler

PATH = &apos;/home/paperspace/data/rossmann/&apos;
</code></pre>
<p>Read the data file into a pandas dataframe and take a peek at the data to see what we&#x2019;re working with.</p>
<pre><code class="language-python">data = pd.read_feather(f&apos;{PATH}joined&apos;)
data.shape
</code></pre>
<pre>
(844338, 93)
</pre>
<p>We can also take a look at the first couple rows in the data with this trick that transposes each row into a column (93 is the number of columns).</p>
<pre><code class="language-python">data.head().T.head(93)
</code></pre>
<p>The data consists of ~800,000 records with a variety of features used to predict sales at a given store on a given day. As mentioned before, we&#x2019;re skipping over details about where these features came from as it&#x2019;s not the focus of this notebook, but you can find more information through the links above. Next we&#x2019;ll define variables that group the features into continuous and categorical buckets. This is very important as neural networks (really anything other than tree models) do not natively handle categorical data well.</p>
<pre><code class="language-python">target = &apos;Sales&apos;
cat_vars = [&apos;Store&apos;, &apos;DayOfWeek&apos;, &apos;Year&apos;, &apos;Month&apos;, &apos;Day&apos;, &apos;StateHoliday&apos;,
            &apos;CompetitionMonthsOpen&apos;, &apos;Promo2Weeks&apos;, &apos;StoreType&apos;, &apos;Assortment&apos;,
            &apos;PromoInterval&apos;, &apos;CompetitionOpenSinceYear&apos;, &apos;Promo2SinceYear&apos;,
            &apos;State&apos;, &apos;Week&apos;, &apos;Events&apos;, &apos;Promo_fw&apos;, &apos;Promo_bw&apos;, &apos;StateHoliday_fw&apos;,
            &apos;StateHoliday_bw&apos;, &apos;SchoolHoliday_fw&apos;, &apos;SchoolHoliday_bw&apos;]
cont_vars = [&apos;CompetitionDistance&apos;, &apos;Max_TemperatureC&apos;, &apos;Mean_TemperatureC&apos;,
             &apos;Min_TemperatureC&apos;, &apos;Max_Humidity&apos;, &apos;Mean_Humidity&apos;, &apos;Min_Humidity&apos;,
             &apos;Max_Wind_SpeedKm_h&apos;, &apos;Mean_Wind_SpeedKm_h&apos;, &apos;CloudCover&apos;, &apos;trend&apos;,
             &apos;trend_DE&apos;, &apos;AfterStateHoliday&apos;, &apos;BeforeStateHoliday&apos;, &apos;Promo&apos;,
             &apos;SchoolHoliday&apos;]
</code></pre>
<p>Set some reasonable default values for missing information so our pre-processing steps won&#x2019;t fail.</p>
<pre><code class="language-python">data = data.set_index(&apos;Date&apos;)
data[cat_vars] = data[cat_vars].fillna(value=&apos;&apos;)
data[cont_vars] = data[cont_vars].fillna(value=0)
</code></pre>
<p>Now we can do something with the categorical variables. The simplest first step is to use scikit-learn&#x2019;s LabelEncoder class to transform the raw category values (many of which are plain text) into unique integers, where each integer maps to a distinct value in that category. The code block below saves the fitted encoders (we&#x2019;ll need them later) and prints out the unique labels that each encoder found.</p>
<pre><code class="language-python">encoders = {}
for v in cat_vars:
    le = LabelEncoder()
    le.fit(data[v].values)
    encoders[v] = le
    data.loc[:, v] = le.transform(data[v].values)
    print(&apos;{0}: {1}&apos;.format(v, le.classes_))
</code></pre>
<pre>
Store: [   1    2    3 ... 1113 1114 1115]
DayOfWeek: [1 2 3 4 5 6 7]
Year: [2013 2014 2015]
Month: [ 1  2  3  4  5  6  7  8  9 10 11 12]
Day: [ 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
 25 26 27 28 29 30 31]
StateHoliday: [False  True]
CompetitionMonthsOpen: [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 24]
Promo2Weeks: [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 24 25]
StoreType: [&apos;a&apos; &apos;b&apos; &apos;c&apos; &apos;d&apos;]
Assortment: [&apos;a&apos; &apos;b&apos; &apos;c&apos;]
PromoInterval: [&apos;&apos; &apos;Feb,May,Aug,Nov&apos; &apos;Jan,Apr,Jul,Oct&apos; &apos;Mar,Jun,Sept,Dec&apos;]
CompetitionOpenSinceYear: [1900 1961 1990 1994 1995 1998 1999 2000 2001 2002 2003 2004 2005 2006
 2007 2008 2009 2010 2011 2012 2013 2014 2015]
Promo2SinceYear: [1900 2009 2010 2011 2012 2013 2014 2015]
State: [&apos;BE&apos; &apos;BW&apos; &apos;BY&apos; &apos;HB,NI&apos; &apos;HE&apos; &apos;HH&apos; &apos;NW&apos; &apos;RP&apos; &apos;SH&apos; &apos;SN&apos; &apos;ST&apos; &apos;TH&apos;]
Week: [ 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
 49 50 51 52]
Events: [&apos;&apos; &apos;Fog&apos; &apos;Fog-Rain&apos; &apos;Fog-Rain-Hail&apos; &apos;Fog-Rain-Hail-Thunderstorm&apos;
 &apos;Fog-Rain-Snow&apos; &apos;Fog-Rain-Snow-Hail&apos; &apos;Fog-Rain-Thunderstorm&apos; &apos;Fog-Snow&apos;
 &apos;Fog-Snow-Hail&apos; &apos;Fog-Thunderstorm&apos; &apos;Rain&apos; &apos;Rain-Hail&apos;
 &apos;Rain-Hail-Thunderstorm&apos; &apos;Rain-Snow&apos; &apos;Rain-Snow-Hail&apos;
 &apos;Rain-Snow-Hail-Thunderstorm&apos; &apos;Rain-Snow-Thunderstorm&apos;
 &apos;Rain-Thunderstorm&apos; &apos;Snow&apos; &apos;Snow-Hail&apos; &apos;Thunderstorm&apos;]
Promo_fw: [0. 1. 2. 3. 4. 5.]
Promo_bw: [0. 1. 2. 3. 4. 5.]
StateHoliday_fw: [0. 1. 2.]
StateHoliday_bw: [0. 1. 2.]
SchoolHoliday_fw: [0. 1. 2. 3. 4. 5. 6. 7.]
SchoolHoliday_bw: [0. 1. 2. 3. 4. 5. 6. 7.]
</pre>
<p>Split the data set into training and validation sets. To preserve the temporal nature of the data and make sure that we don&#x2019;t have any information leaks, we&#x2019;ll just take everything past a certain date and use that as our validation set.</p>
<pre><code class="language-python">train = data[data.index &lt; datetime.datetime(2015, 7, 1)]
val = data[data.index &gt;= datetime.datetime(2015, 7, 1)]

X = train[cat_vars + cont_vars].copy()
X_val = val[cat_vars + cont_vars].copy()
y = train[target].copy()
y_val = val[target].copy()
</code></pre>
<p>Next we can apply scaling to our continuous variables. We can once again leverage scikit-learn and use the StandardScaler class for this. The proper way to apply scaling is to &#x201C;fit&#x201D; the scaler on the training data and then apply the same transformation to both the training and validation data (this is why we had to split the data set in the last step).</p>
<pre><code class="language-python">scaler = StandardScaler()
X.loc[:, cont_vars] = scaler.fit_transform(X[cont_vars].values)
X_val.loc[:, cont_vars] = scaler.transform(X_val[cont_vars].values)
</code></pre>
<p>Normalize the data types that each variable is stored as. This is not strictly necessary but helps save storage space (and potentially processing time, although I&#x2019;m less sure about that).</p>
<pre><code class="language-python">for v in cat_vars:
    X[v] = X[v].astype(&apos;int&apos;).astype(&apos;category&apos;).cat.as_ordered()
    X_val[v] = X_val[v].astype(&apos;int&apos;).astype(&apos;category&apos;).cat.as_ordered()
for v in cont_vars:
    X[v] = X[v].astype(&apos;float32&apos;)
    X_val[v] = X_val[v].astype(&apos;float32&apos;)
</code></pre>
<p>Let&#x2019;s take a look at where we&#x2019;re at. The data should basically be ready to move into the modeling phase.</p>
<pre><code class="language-python">X.shape, X_val.shape, y.shape, y_val.shape
</code></pre>
<pre>
((814150, 38), (30188, 38), (814150,), (30188,))
</pre>
<pre><code class="language-python">X.dtypes
</code></pre>
<pre>
Store                       category
DayOfWeek                   category
Year                        category
Month                       category
Day                         category
StateHoliday                category
CompetitionMonthsOpen       category
Promo2Weeks                 category
StoreType                   category
Assortment                  category
PromoInterval               category
CompetitionOpenSinceYear    category
Promo2SinceYear             category
State                       category
Week                        category
Events                      category
Promo_fw                    category
Promo_bw                    category
StateHoliday_fw             category
StateHoliday_bw             category
SchoolHoliday_fw            category
SchoolHoliday_bw            category
CompetitionDistance          float32
Max_TemperatureC             float32
Mean_TemperatureC            float32
Min_TemperatureC             float32
Max_Humidity                 float32
Mean_Humidity                float32
Min_Humidity                 float32
Max_Wind_SpeedKm_h           float32
Mean_Wind_SpeedKm_h          float32
CloudCover                   float32
trend                        float32
trend_DE                     float32
AfterStateHoliday            float32
BeforeStateHoliday           float32
Promo                        float32
SchoolHoliday                float32
</pre>
<p>We now have two main options for handling the categorical variables. The first option, which is the &#x201C;traditional&#x201D; way of handling categories, is to do a one-hot encoding for each category. This approach would create a binary variable for each unique value in each category, with the value being a 1 for the &#x201C;correct&#x201D; category and 0 for everything else. One-hot encoding works fairly well and is quite easy to do (there&#x2019;s even a scikit-learn class for it), but it&#x2019;s not perfect. It&#x2019;s particularly challenging with high-cardinality variables because it creates a very large, very sparse array that&#x2019;s hard to learn from.</p>
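<p>Just to make the scale of the problem concrete, here&apos;s a quick sketch of what the one-hot route would look like using pandas (my own illustration, not a step in the pipeline; the column counts follow from the cardinalities we saw in the label encoder output, so treat the exact shape as approximate).</p>
<pre><code class="language-python"># One-hot encoding just the Store and DayOfWeek columns for illustration.
# Store alone expands into 1,115 sparse binary columns.
one_hot = pd.get_dummies(X[[&apos;Store&apos;, &apos;DayOfWeek&apos;]], columns=[&apos;Store&apos;, &apos;DayOfWeek&apos;])
one_hot.shape  # should come out to roughly (814150, 1122): 1,115 store columns plus 7 day-of-week columns
</code></pre>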
<p>Fortunately there&#x2019;s a better way, which is something called entity embeddings or category embeddings (I don&#x2019;t think there&#x2019;s a standard name for this yet). Jeremy covers it extensively in the class (also <a href="https://towardsdatascience.com/deep-learning-structured-data-8d6a278f3088?ref=johnwittenauer.net">this blog post</a> explains it very well). The basic idea is to create a distributed representation of the category using a vector of continuous numbers, where the length of the vector is lower than the cardinality of the category. The key insight is that this vector is learned by the network. It&#x2019;s part of the optimization graph. This allows the network to model complex, non-linear interactions between categories and other features in your input. It&#x2019;s quite useful, and as we&#x2019;ll see at the end, these embeddings can be used in interesting ways outside of the neural network itself.</p>
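<p>To make the idea concrete before we build the full model, here&apos;s a minimal sketch of a single embedded categorical input (not part of the model we build below). An Embedding layer is just a lookup table whose rows are learned by backpropagation along with the rest of the network.</p>
<pre><code class="language-python"># Minimal illustration: map a day-of-week integer (7 possible values) to a
# learned 4-dimensional vector.
from keras.layers import Input, Reshape
from keras.layers.embeddings import Embedding
from keras.models import Model

day_in = Input(shape=(1,))
day_vec = Embedding(input_dim=7, output_dim=4)(day_in)  # holds a 7x4 weight matrix
day_vec = Reshape((4,))(day_vec)                        # drop the sequence dimension
Model(day_in, day_vec).summary()
</code></pre>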
<p>In order to build a model using embeddings, we need to do some more prep work on our categories. First, let&#x2019;s create a list of category names along with their cardinality.</p>
<pre><code class="language-python">cat_sizes = [(c, len(X[c].cat.categories)) for c in cat_vars]
cat_sizes
</code></pre>
<pre>
[(&apos;Store&apos;, 1115),
 (&apos;DayOfWeek&apos;, 7),
 (&apos;Year&apos;, 3),
 (&apos;Month&apos;, 12),
 (&apos;Day&apos;, 31),
 (&apos;StateHoliday&apos;, 2),
 (&apos;CompetitionMonthsOpen&apos;, 25),
 (&apos;Promo2Weeks&apos;, 26),
 (&apos;StoreType&apos;, 4),
 (&apos;Assortment&apos;, 3),
 (&apos;PromoInterval&apos;, 4),
 (&apos;CompetitionOpenSinceYear&apos;, 23),
 (&apos;Promo2SinceYear&apos;, 8),
 (&apos;State&apos;, 12),
 (&apos;Week&apos;, 52),
 (&apos;Events&apos;, 22),
 (&apos;Promo_fw&apos;, 6),
 (&apos;Promo_bw&apos;, 6),
 (&apos;StateHoliday_fw&apos;, 3),
 (&apos;StateHoliday_bw&apos;, 3),
 (&apos;SchoolHoliday_fw&apos;, 8),
 (&apos;SchoolHoliday_bw&apos;, 8)]
</pre>
<p>Now we need to decide on the length of each embedding vector. Jeremy proposed a simple rule of thumb: half the cardinality (rounded up), capped at a maximum of 50.</p>
<pre><code class="language-python">embedding_sizes = [(c, min(50, (c + 1) // 2)) for _, c in cat_sizes]
embedding_sizes
</code></pre>
<pre>
[(1115, 50),
 (7, 4),
 (3, 2),
 (12, 6),
 (31, 16),
 (2, 1),
 (25, 13),
 (26, 13),
 (4, 2),
 (3, 2),
 (4, 2),
 (23, 12),
 (8, 4),
 (12, 6),
 (52, 26),
 (22, 11),
 (6, 3),
 (6, 3),
 (3, 2),
 (3, 2),
 (8, 4),
 (8, 4)]
</pre>
<p>One last pre-processing step. Keras requires that each &#x201C;input&#x201D; into the model be fed in as a separate array, and since each embedding has its own input, we need to do some transformations to get the data in the right format.</p>
<pre><code class="language-python">X_array = []
X_val_array = []

for i, v in enumerate(cat_vars):
    X_array.append(X.iloc[:, i])
    X_val_array.append(X_val.iloc[:, i])

X_array.append(X.iloc[:, len(cat_vars):])
X_val_array.append(X_val.iloc[:, len(cat_vars):])

len(X_array), len(X_val_array)
</code></pre>
<pre>
(23, 23)
</pre>
<p>Okay! We&#x2019;re finally ready to get to the modeling part. Let&#x2019;s get some imports out of the way. I&#x2019;ve also defined a custom metric to calculate root mean squared percentage error, which was originally used by the Kaggle competition to score this data set.</p>
<pre><code class="language-python">from keras import backend as K
from keras import regularizers
from keras.models import Sequential
from keras.models import Model
from keras.layers import Activation, BatchNormalization, Concatenate
from keras.layers import Dropout, Dense, Input, Reshape
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

def rmspe(y_true, y_pred):
    pct_var = (y_true - y_pred) / y_true
    return K.sqrt(K.mean(K.square(pct_var)))
</code></pre>
<p>Now for the model itself. I tried to make this as similar to Jeremy&#x2019;s model as I could, although there are some slight differences. The &#x201C;for&#x201D; section at the top shows how to add embeddings. They then get concatenated together and we apply dropout to the unified embedding layer. Next we concatenate the output of that layer with our continuous inputs and feed the whole thing into a dense layer. From here on it&#x2019;s pretty standard stuff. The only notable design choice is I omitted batch normalization because it seemed to hurt performance no matter what I did. I also increased dropout a bit from what Jeremy had in his PyTorch architecture for this data. Finally, note the inclusion of the &#x201C;rmspe&#x201D; function as a metric during the compile step (this will show up later during training).</p>
<pre><code class="language-python">def EmbeddingNet(cat_vars, cont_vars, embedding_sizes):
    inputs = []
    embed_layers = []
    for (c, (in_size, out_size)) in zip(cat_vars, embedding_sizes):
        i = Input(shape=(1,))
        o = Embedding(in_size, out_size, name=c)(i)
        o = Reshape(target_shape=(out_size,))(o)
        inputs.append(i)
        embed_layers.append(o)

    embed = Concatenate()(embed_layers)
    embed = Dropout(0.04)(embed)

    cont_input = Input(shape=(len(cont_vars),))
    inputs.append(cont_input)

    x = Concatenate()([embed, cont_input])

    x = Dense(1000, kernel_initializer=&apos;he_normal&apos;)(x)
    x = Activation(&apos;relu&apos;)(x)
    x = Dropout(0.1)(x)

    x = Dense(500, kernel_initializer=&apos;he_normal&apos;)(x)
    x = Activation(&apos;relu&apos;)(x)
    x = Dropout(0.1)(x)

    x = Dense(1, kernel_initializer=&apos;he_normal&apos;)(x)
    x = Activation(&apos;linear&apos;)(x)

    model = Model(inputs=inputs, outputs=x)
    opt = Adam(lr=0.001)
    model.compile(loss=&apos;mean_absolute_error&apos;, optimizer=opt, metrics=[rmspe])

    return model
</code></pre>
<p>One of the cool tricks Jeremy introduced in the class was the concept of a learning rate finder. The idea is to start with a very small learning rate and slowly increase it throughout the epoch, and monitor the loss along the way. It should end up as a curve that gives a good indication of where to set the learning rate for training. To accomplish this with Keras, I found a script on Github that implements learning rate cycling and includes a class that&#x2019;s supposed to mimic Jeremy&#x2019;s LR finder. We can just download a copy to the local directory.</p>
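<p>Before doing that, here&apos;s a stripped-down sketch of what a learning rate finder callback does under the hood, just to take some of the mystery out of it. This is my own simplification for illustration purposes, not the clr.py script we&apos;re about to download (which is more robust).</p>
<pre><code class="language-python"># Simplified LR finder sketch: ramp the learning rate exponentially each batch
# and record the loss so we can plot loss vs. learning rate afterwards.
from keras import backend as K
from keras.callbacks import Callback

class SimpleLRFinder(Callback):
    def __init__(self, min_lr=1e-5, max_lr=10.0, steps=1000):
        super(SimpleLRFinder, self).__init__()
        self.min_lr, self.max_lr, self.steps = min_lr, max_lr, steps
        self.lrs, self.losses = [], []

    def on_train_begin(self, logs=None):
        K.set_value(self.model.optimizer.lr, self.min_lr)

    def on_batch_end(self, batch, logs=None):
        lr = float(K.get_value(self.model.optimizer.lr))
        self.lrs.append(lr)
        self.losses.append(logs[&apos;loss&apos;])
        # grow the learning rate by a constant factor each batch so it moves
        # from min_lr to max_lr over the requested number of steps
        K.set_value(self.model.optimizer.lr,
                    lr * (self.max_lr / self.min_lr) ** (1.0 / self.steps))
</code></pre>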
<pre><code class="language-python">!wget &quot;https://raw.githubusercontent.com/titu1994/keras-one-cycle/master/clr.py&quot;
</code></pre>
<pre>
--2018-10-04 20:20:17--  https://raw.githubusercontent.com/titu1994/keras-one-cycle/master/clr.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.200.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.200.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 22310 (22K) [text/plain]
Saving to: &#x2018;clr.py&#x2019;

clr.py              100%[===================&gt;]  21.79K  --.-KB/s    in 0.009s  

2018-10-04 20:20:17 (2.35 MB/s) - &#x2018;clr.py&#x2019; saved [22310/22310]
</pre>
<p>Let&#x2019;s set up and train the model for one epoch using the LRFinder class as a callback. It will slowly but exponentially increase the learning rate each batch and track the loss so we can plot the results.</p>
<pre><code class="language-python">lr_finder = LRFinder(num_samples=X.shape[0], batch_size=128, minimum_lr=1e-5,
                     maximum_lr=10, lr_scale=&apos;exp&apos;, loss_smoothing_beta=0.995,
                     verbose=False)
model = EmbeddingNet(cat_vars, cont_vars, embedding_sizes)
history = model.fit(x=X_array, y=y, batch_size=128, epochs=1, verbose=1,
                    callbacks=[lr_finder], validation_data=(X_val_array, y_val),
                    shuffle=False)
</code></pre>
<pre>
Train on 814150 samples, validate on 30188 samples
Epoch 1/1
814150/814150 [==============================] - 73s 90us/step - loss: 2521.7429 - rmspe: 0.4402 - val_loss: 3441.1762 - val_rmspe: 0.5088
</pre>
<pre><code class="language-python">lr_finder.plot_schedule(clip_beginning=20)
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/10/keras1-1.png" alt loading="lazy"></p>
<p>It doesn&#x2019;t look as good as the plot Jeremy used in the class. The PyTorch version seemed to make it much more apparent where the loss started to level off. I haven&#x2019;t dug into this too closely but I&#x2019;m guessing there are some &quot;tricks&quot; in that version that we aren&apos;t using. If I had to eyeball this I&#x2019;d say it&#x2019;s recommending 1e-4 for the learning rate, but Jeremy used 1e-3 so we&#x2019;ll go with that instead.</p>
<p>We&#x2019;re now ready to train the model. I&#x2019;ve included two callbacks (both built into Keras) to demonstrate how they work. The first one automatically reduces the learning rate as we progress through training if the validation error stops improving. The second one will save a copy of the model weights to a file every time we reach a new low in validation error.</p>
<pre><code class="language-python">model = EmbeddingNet(cat_vars, cont_vars, embedding_sizes)
lr_reducer = ReduceLROnPlateau(monitor=&apos;val_loss&apos;, factor=0.2, patience=3,
                               verbose=1, mode=&apos;auto&apos;, min_delta=10, cooldown=0,
                               min_lr=0.0001)
checkpoint = ModelCheckpoint(&apos;best_model_weights.hdf5&apos;, monitor=&apos;val_loss&apos;,
                             save_best_only=True)
history = model.fit(x=X_array, y=y, batch_size=128, epochs=20, verbose=1,
                    callbacks=[lr_reducer, checkpoint],
                    validation_data=(X_val_array, y_val), shuffle=False)
</code></pre>
<pre>
Train on 814150 samples, validate on 30188 samples
Epoch 1/20
814150/814150 [==============================] - 68s 83us/step - loss: 1138.6056 - rmspe: 0.2421 - val_loss: 1923.3162 - val_rmspe: 0.3177
Epoch 2/20
814150/814150 [==============================] - 66s 81us/step - loss: 962.1155 - rmspe: 0.2140 - val_loss: 1895.0041 - val_rmspe: 0.3015
Epoch 3/20
814150/814150 [==============================] - 66s 80us/step - loss: 850.5718 - rmspe: 0.1899 - val_loss: 1551.5644 - val_rmspe: 0.2554
Epoch 4/20
814150/814150 [==============================] - 66s 81us/step - loss: 760.7246 - rmspe: 0.1607 - val_loss: 1589.6841 - val_rmspe: 0.2556
Epoch 5/20
814150/814150 [==============================] - 66s 81us/step - loss: 723.1884 - rmspe: 0.1522 - val_loss: 2032.6661 - val_rmspe: 0.3093
Epoch 6/20
814150/814150 [==============================] - 66s 81us/step - loss: 701.6135 - rmspe: 0.1470 - val_loss: 1559.3813 - val_rmspe: 0.2455

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
Epoch 7/20
814150/814150 [==============================] - 66s 81us/step - loss: 759.7100 - rmspe: 0.1551 - val_loss: 1363.9912 - val_rmspe: 0.2134
Epoch 8/20
814150/814150 [==============================] - 66s 81us/step - loss: 687.3188 - rmspe: 0.1445 - val_loss: 1238.6456 - val_rmspe: 0.1987
Epoch 9/20
814150/814150 [==============================] - 66s 82us/step - loss: 664.9696 - rmspe: 0.1411 - val_loss: 1156.7629 - val_rmspe: 0.1894
Epoch 10/20
814150/814150 [==============================] - 66s 81us/step - loss: 648.3002 - rmspe: 0.1383 - val_loss: 1085.9985 - val_rmspe: 0.1804
Epoch 11/20
814150/814150 [==============================] - 66s 81us/step - loss: 634.7324 - rmspe: 0.1358 - val_loss: 1046.5626 - val_rmspe: 0.1764
Epoch 12/20
814150/814150 [==============================] - 66s 81us/step - loss: 620.5305 - rmspe: 0.1331 - val_loss: 998.0284 - val_rmspe: 0.1702
Epoch 13/20
814150/814150 [==============================] - 66s 80us/step - loss: 608.7635 - rmspe: 0.1308 - val_loss: 972.2079 - val_rmspe: 0.1672
Epoch 14/20
814150/814150 [==============================] - 66s 81us/step - loss: 596.7082 - rmspe: 0.1287 - val_loss: 944.8604 - val_rmspe: 0.1627
Epoch 15/20
814150/814150 [==============================] - 66s 81us/step - loss: 585.2907 - rmspe: 0.1265 - val_loss: 902.0995 - val_rmspe: 0.1568
Epoch 16/20
814150/814150 [==============================] - 66s 81us/step - loss: 575.5892 - rmspe: 0.1246 - val_loss: 854.3993 - val_rmspe: 0.1492
Epoch 17/20
814150/814150 [==============================] - 66s 81us/step - loss: 566.3440 - rmspe: 0.1228 - val_loss: 817.1876 - val_rmspe: 0.1438
Epoch 18/20
814150/814150 [==============================] - 66s 81us/step - loss: 558.5853 - rmspe: 0.1214 - val_loss: 767.2299 - val_rmspe: 0.1369
Epoch 19/20
814150/814150 [==============================] - 66s 81us/step - loss: 550.4629 - rmspe: 0.1200 - val_loss: 730.3196 - val_rmspe: 0.1317
Epoch 20/20
814150/814150 [==============================] - 66s 81us/step - loss: 542.9558 - rmspe: 0.1188 - val_loss: 698.6143 - val_rmspe: 0.1278
</pre>
<p>By the end it&#x2019;s doing pretty good, and it looks like the model is still improving. We can quickly get a snapshot of its performance using the &#x201C;history&#x201D; object that Keras&#x2019;s &quot;fit&quot; method returns.</p>
<pre><code class="language-python">loss_history = history.history[&apos;loss&apos;]
val_loss_history = history.history[&apos;val_loss&apos;]
min_val_epoch = val_loss_history.index(min(val_loss_history)) + 1

print(&apos;min training loss = {0}&apos;.format(min(loss_history)))
print(&apos;min val loss = {0}&apos;.format(min(val_loss_history)))
print(&apos;min val epoch = {0}&apos;.format(min_val_epoch))
</code></pre>
<pre>
min training loss = 542.9558401937004
min val loss = 698.6142525395542
min val epoch = 20
</pre>
<p>I also like to make plots to visually see what&#x2019;s going on. Let&#x2019;s create a function that plots the training and validation loss history.</p>
<pre><code class="language-python">from jupyterthemes import jtplot
jtplot.style()

def plot_loss_history(history, n_epochs):
    fig, ax = plt.subplots(figsize=(8, 8 * 3 / 4))
    ax.plot(list(range(n_epochs)), history.history[&apos;loss&apos;], label=&apos;Training Loss&apos;)
    ax.plot(list(range(n_epochs)), history.history[&apos;val_loss&apos;], label=&apos;Validation Loss&apos;)
    ax.set_xlabel(&apos;Epoch&apos;)
    ax.set_ylabel(&apos;Loss&apos;)
    ax.legend(loc=&apos;upper right&apos;)
    fig.tight_layout()

plot_loss_history(history, 20)
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/10/keras1-2.png" alt loading="lazy"></p>
<p>The validation loss was pretty unstable early on but was really starting to converge toward the end of training. We can do something similar for the learning rate history.</p>
<pre><code class="language-python">def plot_learning_rate(history):
    fig, ax = plt.subplots(figsize=(8, 8 * 3 / 4))
    ax.set_xlabel(&apos;Training Iterations&apos;)
    ax.set_ylabel(&apos;Learning Rate&apos;)
    ax.plot(history.history[&apos;lr&apos;])
    fig.tight_layout()

plot_learning_rate(history)
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/10/keras1-3.png" alt loading="lazy"></p>
<p>One other innovation Jeremy introduced in the class is the idea of using learning rate cycles to help prevent the model from settling in a bad local minimum. This is based on research by Leslie Smith that showed using this type of learning rate policy can lead to quicker convergence and better accuracy (this is also where the learning rate finder idea came from). Fortunately the file we downloaded earlier includes support for cyclical learning rates in Keras, so we can try this out ourselves. The policy Jeremy is currently recommending is called a &#x201C;one-cycle&#x201D; policy so that&#x2019;s what we&#x2019;ll try.</p>
<p>(As an aside, Jeremy <a href="http://www.fast.ai/2018/04/30/dawnbench-fastai/?ref=johnwittenauer.net">wrote a blog post</a> about this if you&apos;d like to dig into its origins a bit more. His results applying it to ImageNet were quite impressive.)</p>
<pre><code class="language-python">model2 = EmbeddingNet(cat_vars, cont_vars, embedding_sizes)
batch_size = 128
n_epochs = 10
lr_manager = OneCycleLR(num_samples=X.shape[0] + batch_size, num_epochs=n_epochs,
                        batch_size=batch_size, max_lr=0.01, end_percentage=0.1,
                        scale_percentage=None, maximum_momentum=None,
                        minimum_momentum=None, verbose=False)
history = model2.fit(x=X_array, y=y, batch_size=batch_size, epochs=n_epochs,
                     verbose=1, callbacks=[checkpoint, lr_manager],
                     validation_data=(X_val_array, y_val), shuffle=False)
</code></pre>
<pre>
Train on 814150 samples, validate on 30188 samples
Epoch 1/10
814150/814150 [==============================] - 76s 93us/step - loss: 1115.8234 - rmspe: 0.2384 - val_loss: 1625.4826 - val_rmspe: 0.2847
Epoch 2/10
814150/814150 [==============================] - 74s 90us/step - loss: 853.5083 - rmspe: 0.1828 - val_loss: 1308.4618 - val_rmspe: 0.2416
Epoch 3/10
814150/814150 [==============================] - 73s 90us/step - loss: 800.1833 - rmspe: 0.1622 - val_loss: 1379.4527 - val_rmspe: 0.2425
Epoch 4/10
814150/814150 [==============================] - 74s 91us/step - loss: 820.6853 - rmspe: 0.1627 - val_loss: 1353.2198 - val_rmspe: 0.2386
Epoch 5/10
814150/814150 [==============================] - 73s 90us/step - loss: 823.7708 - rmspe: 0.1641 - val_loss: 1423.9368 - val_rmspe: 0.2440
Epoch 6/10
814150/814150 [==============================] - 74s 90us/step - loss: 778.9107 - rmspe: 0.1548 - val_loss: 1425.7734 - val_rmspe: 0.2449
Epoch 7/10
814150/814150 [==============================] - 73s 90us/step - loss: 760.5194 - rmspe: 0.1508 - val_loss: 1324.7112 - val_rmspe: 0.2273
Epoch 8/10
814150/814150 [==============================] - 74s 91us/step - loss: 734.5933 - rmspe: 0.1464 - val_loss: 1449.1921 - val_rmspe: 0.2401
Epoch 9/10
814150/814150 [==============================] - 74s 91us/step - loss: 750.8221 - rmspe: 0.1491 - val_loss: 2127.6987 - val_rmspe: 0.3179
Epoch 10/10
814150/814150 [==============================] - 74s 91us/step - loss: 750.6736 - rmspe: 0.1500 - val_loss: 1375.3424 - val_rmspe: 0.2121
</pre>
<p>As you can probably tell from the model error, I didn&#x2019;t have a lot of success with this strategy. I tried a few different configurations and nothing really worked, but I wouldn&#x2019;t say it&#x2019;s an indictment of the technique so much as it just didn&#x2019;t happen to do well within the narrow scope that I attempted to apply it. Nevertheless, I&#x2019;m definitely adding it to my toolbox for future reference.</p>
<p>If my earlier description wasn&#x2019;t clear, this is how the learning rate is supposed to evolve over time. It forms a triangle from the starting point, coming back to the original learning rate towards the end and then decaying further as training wraps up.</p>
<pre><code class="language-python">plot_learning_rate(lr_manager)
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/10/keras1-4.png" alt loading="lazy"></p>
<p>One last trick worth discussing is what we can do with the embeddings that the network learned. Similar to word embeddings, these vectors contain potentially interesting information about how the values in each category relate to each other. One really simple way to see this visually is to do a PCA transform on the learned embedding weights and plot the first two dimensions. Let&#x2019;s create a function to do just that.</p>
<pre><code class="language-python">def plot_embedding(model, encoders, category):
    embedding_layer = model.get_layer(category)
    weights = embedding_layer.get_weights()[0]
    pca = PCA(n_components=2)
    weights = pca.fit_transform(weights)
    weights_t = weights.T
    fig, ax = plt.subplots(figsize=(8, 8 * 3 / 4))
    ax.scatter(weights_t[0], weights_t[1])
    for i, day in enumerate(encoders[category].classes_):
        ax.annotate(day, (weights_t[0, i], weights_t[1, i]))
    fig.tight_layout()
</code></pre>
<p>We can now plot any categorical variable in the model and get a sense of which categories are more or less similar to each other. For instance, if we examine &quot;day of week&quot;, it seems to have picked up that Sunday (7 on the chart) is quite different than every other day for store sales. And if we look at &quot;state&quot; (this data is for a German company BTW) there&#x2019;s probably some regional similarity to the cluster in the bottom left. It&#x2019;s a really cool technique that potentially has a wide range of uses.</p>
<pre><code class="language-python">plot_embedding(model, encoders, &apos;DayOfWeek&apos;)
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/10/keras1-5.png" alt loading="lazy"></p>
<pre><code class="language-python">plot_embedding(model, encoders, &apos;State&apos;)
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/10/keras1-6.png" alt loading="lazy"></p>
<p>That pretty much wraps things up!  Using deep learning to model structured data is really under-discussed but seems likely to have huge potential since so many data scientists spend so much of their time working with data that looks like this.  The use of embeddings  in particular feels like a bit of a game-changer on high-cardinality categories.  Hopefully breaking it down step by step will help a few of you out there figure out how to adapt deep learning to your problem domain.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The Lean Startup]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This post is about Eric Ries&apos;s book &quot;The Lean Startup&quot;.  This book&apos;s message has a wider applicability than one might think, because contrary to what the title suggests, its methodology applies to more than just startups.  There&apos;s a telling phrase that Eric</p>]]></description><link>https://www.johnwittenauer.net/the-lean-startup/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad674e</guid><category><![CDATA[Book Review]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Tue, 10 Jul 2018 00:20:20 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This post is about Eric Ries&apos;s book &quot;The Lean Startup&quot;.  This book&apos;s message has a wider applicability than one might think, because contrary to what the title suggests, its methodology applies to more than just startups.  There&apos;s a telling phrase that Eric uses to define a startup, which is &#x201C;a human institution designed to create a new product or service under conditions of extreme uncertainty&#x201D;.   I love this definition because it says nothing about what the institution looks like or how big it is.  In fact, lean startup principles very often get applied to small &quot;innovation teams&quot; within large, otherwise slow-moving and highly bureaucratic companies.  Ries talks about this in the book, stating that there are basically 3 things needed for innovation to thrive - scarce but secure resources, independent authority to develop the business, and a personal stake in the outcome.  Teams with these characteristics can deliver surprisingly powerful results.</p>
<p>There are a couple of key ideas to the lean startup approach that are worth discussing.  The first is a concept Ries calls &quot;validated learning&quot;.  I think of validated learning as applying the scientific method to building a business.  The idea is to systematically test your assumptions by treating everything as an experiment.  Specifically, you should set up experiments for every assumption or decision in such a way that there is a non-ambiguous measurable component (defined a priori) that tells you if you were right or not.  Every product, feature, marketing campaign etc. becomes an experiment.  Done correctly, these experiments result in empirical demonstrations that valuable truths about the business&#x2019;s prospects have been discovered.</p>
<p>This idea seems to mesh pretty well with what Nassim Taleb calls &quot;tinkering&quot;, which is experimentation without necessarily having a clear thesis about what you hope to find.  Tinkering is an asymmetric process because the upside of discovering something can be many orders of magnitude greater than the cost of doing the experiment.  In Ries&apos;s model the experimentation is more strongly guided by prior assumptions, but the asymmetry still holds.  I also think adopting this philosophy would lead to making predictions that are more easily testable (almost out of necessity), which is a good way to calibrate prior beliefs on a wide range of topics.</p>
<p>A related thread in the book is the idea of only doing work that&apos;s necessary to achieve validated learning.  Anything that doesn&apos;t contribute to learning is a form of waste, so just do the stuff that&apos;s absolutely necessary to test your assumptions.  I imagine this is harder to put into practice than it sounds.  It&apos;s not at all trivial to identify what&apos;s really adding value when you&apos;re trying to make decisions day-to-day about what to focus on.  There&apos;s probably a lot of gray areas and I doubt anyone actually, literally tracks 100% of their time and effort according to this metric, but it seems like a good heuristic.</p>
<p>Another key idea from the book is the concept of value vs. growth hypotheses.  It goes like this - the two most important assumptions to make for a product are its value hypothesis (does it deliver value to customers once they&apos;re using it) and its growth hypothesis (how will new customers discover it).  Every assumption falls into one of these categories.  The value hypothesis must be proved before the growth hypothesis.  If you&apos;ve adopted the &quot;validated learning&quot; philosophy, this means that experiments testing the value hypothesis are essentially an early version of your product.</p>
<p>I like this distinction because it has a focusing effect when deciding what to work on.  There&apos;s no point worrying about how you&apos;re going to grow if you don&apos;t have a product that anyone wants to use.  This directly ties into two of the most famous (infamous?) concepts from the book, the minimum viable product (MVP) and &quot;pivoting&quot;.  An MVP is the smallest set of features that can be put together to test the value hypothesis.  It&apos;s a bare-bones version of the product you hope to build.  If the MVP doesn&apos;t work (i.e. if customers do not find the value hypothesis compelling) then it&apos;s time to pivot.  Pivoting is just testing a new value hypothesis.  It&apos;s adjusting your business assumptions in the face of new evidence gained from testing the MVP.</p>
<p>Both of these ideas have permeated popular culture (or at least <a href="https://www.imdb.com/title/tt3222784/?ref=johnwittenauer.net">startup culture</a>) to the point where they&apos;re way overused.  I think Ries&apos;s original intent with MVPs and pivoting makes a lot of sense, but there are some valid criticisms.  The biggest one is just how subjective all of it can be.  It&apos;s not like you get a simple binary pass/fail.  The initial product might be kind-of sort-of working, but not quite working well enough, but maybe it will with a few more tweaks or by adding features X and Y. It&apos;s very hard to know where you&apos;re really at, and no amount of measuring and experimentation can eliminate that ambiguity.  Still, as a framework for getting to a viable business model it&apos;s a very logical approach.</p>
<p>Overall, I thought this was a great read.  Ries&apos;s framework for product development turns out to be surprisingly flexible.  Even though the focus of the book is on building new businesses, I think the concepts are general enough that they can be applied to a much wider set of circumstances. In a sense, Ries is just broadening the definition of what it means to be innovative (a topic I&apos;ve <a href="https://www.johnwittenauer.net/the-innovators/">written about before</a>) and defining a strategy to consistently achieve innovative results.  The book covers a lot more ground than what I discussed here, but it&apos;s very accessible and easy to get through in a few days.  Highly recommend it.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A Sampling Of Monte Carlo Methods]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Learning data science is a process of exploration.  It involves continually expanding the surface area of concepts and techniques that you have at your disposal by learning new topics that build on or share a knowledge base with the topics you&apos;ve already mastered.  To visualize this, one can</p>]]></description><link>https://www.johnwittenauer.net/a-sampling-of-monte-carlo-methods/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad674d</guid><category><![CDATA[Data Science]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Mon, 16 Apr 2018 00:18:06 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Learning data science is a process of exploration.  It involves continually expanding the surface area of concepts and techniques that you have at your disposal by learning new topics that build on or share a knowledge base with the topics you&apos;ve already mastered.  To visualize this, one can imagine a vast network of interconnected nodes.  Some nodes sit toward the outside of the graph and have a lot of edges directed toward them - these are topics that require understanding lots of other related concepts before they can be learned.  Then there are nodes on the interior of the graph with lots and lots of connections that lead everywhere.  These are the really foundational concepts that open the doors to all sorts of new discoveries.</p>
<p>My own experience has been that some of these really important topics can slip through the cracks for a surprisingly long time, especially if you&apos;re mostly self-taught like I am.  Going back to basics can seem like a waste of time when all of the mainstream focus and attention is on sexy new stuff like self-driving cars, but I&apos;ve found that understanding the basics at a deep level really compounds your knowledge returns over time.</p>
<p>In a <a href="https://www.johnwittenauer.net/markov-chains-from-scratch/">recent post</a> I did an introduction to one such &quot;foundational topic&quot; called Markov Chains.  Today we&apos;ll explore a related but perhaps even more basic concept - Monte Carlo methods.</p>
<p>Monte Carlo methods are a class of techniques that use random sampling to simulate a draw from some distribution. By making repeated draws and calculating an aggregate on the distribution of those draws, it&apos;s possible to approximate a solution to a problem that may be very hard to calculate directly.</p>
<p>If that sounds overly esoteric, don&apos;t worry; we&apos;re going to step through some examples that will really help crystallize the above statement.  These examples are intentionally basic. They&apos;re designed to illustrate the core concept without getting lost in problem-specific details. Consider these a starting point for learning how to apply Monte Carlo more broadly.</p>
<p>One key point that&apos;s worth stating - Monte Carlo methods are an <strong>approach</strong>, not an algorithm. This was confusing to me at first. I kept looking for a &quot;Monte Carlo&quot; python library that implemented everything for me like scikit-learn does. There isn&apos;t one. It&apos;s a way of thinking about a problem, similar to dynamic programming. Each problem is different. There may be some patterns but they have to be learned over time. It isn&apos;t something that can be abstracted into a library.</p>
<p>The application of Monte Carlo methods tends to follow a pattern. There are four general steps, and you&apos;ll see below that the problems we tackle pretty much adhere to this formula.</p>
<ol>
<li>Create a model of the domain</li>
<li>Generate random draws from the distribution over the domain</li>
<li>Perform some deterministic calculation on the output</li>
<li>Aggregate the results</li>
</ol>
<p>This sequence informs us about the type of problems where the general application of Monte Carlo methods is useful. Specifically, when we have some <strong>generative model</strong> of a domain (i.e. something that we can use to generate data points from at will) and want to ask a question about that domain that isn&apos;t easily answered analytically, we can use Monte Carlo to get the answer instead.</p>
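<p>Before diving into the examples, here&apos;s a bare-bones sketch of that pattern in code (my own framing, just to make the structure explicit). Every example below is essentially a specialization of this skeleton, with the sampling and calculation steps tailored to the question being asked.</p>
<pre><code class="language-python"># Bare-bones Monte Carlo skeleton: sample from a generative model, apply a
# deterministic calculation to each sample, and aggregate the results.
def monte_carlo_estimate(sample_fn, calc_fn, n_samples):
    results = [calc_fn(sample_fn()) for _ in range(n_samples)]
    return sum(results) / float(n_samples)

# Example usage once roll_dice (defined below) exists:
# monte_carlo_estimate(roll_dice, lambda roll: roll &gt;= 6, 100000)
</code></pre>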
<p>To start off, let&apos;s tackle one of the simplest domains there is - rolling a pair of dice. This is very straightforward to implement.</p>
<pre><code class="language-python">%matplotlib inline
import random

def roll_die():
    return random.randint(1, 6)

def roll_dice():
    return roll_die() + roll_die()

print(roll_dice())
print(roll_dice())
print(roll_dice())
</code></pre>
<pre>
9
8
9
</pre>
<p>Think of the dice as a probability distribution. On any given roll, there&apos;s some likelihood of getting each possible number. Collectively, these probabilities represent the distribution for the dice-rolling domain. Now imagine you want to know what this distribution looks like, having only the knowledge that you have two dice and each one can roll a 1-6 with equal probability. How would you calculate this distribution analytically? It&apos;s not obvious, even for the simplest of domains. Fortunately there&apos;s an easy way to figure it out - just roll the dice over and over, and count how many times you get each combination!</p>
<pre><code class="language-python">import matplotlib.pyplot as plt

def roll_histogram(samples):
    rolls = []
    for _ in range(samples):
        rolls.append(roll_dice())

    fig, ax = plt.subplots(figsize=(12, 9))
    plt.hist(rolls, bins=11)

roll_histogram(100000)
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/04/mc-1.png" alt loading="lazy"></p>
<p>The histogram gives us a visual sense of the likelihood of each roll, but what if we want something more targeted? Say, for example, that we wanted to know the probability of rolling a 6 or higher? Again, consider how you would solve this with an equation. It&apos;s not easy, right? But with a few very simple lines of code we can write a function that makes this question trivial.</p>
<pre><code class="language-python">def prob_of_roll_greater_than_equal_to(x, n_samples):
    geq = 0
    for _ in range(n_samples):
        if roll_dice() &gt;= x:
            geq += 1

    probability = float(geq) / n_samples
    print(&apos;Probability of rolling greater than or equal to {0}: {1} ({2} samples)&apos;.format(x, probability, n_samples))
</code></pre>
<p>All we&apos;re doing is running a loop some number of times and rolling the dice, then recording if the result is greater than or equal to some number of interest. At the end we calculate the proportion of samples that matched our criteria, and we have the probability we&apos;re interested in. Easy!</p>
<p>You might notice that there&apos;s a parameter for the number of samples to draw. This is one of the tricky parts of Monte Carlo. We&apos;re relying on the <a href="https://en.wikipedia.org/wiki/Law_of_large_numbers?ref=johnwittenauer.net">law of large numbers</a> to get an accurate result, but how large is large enough? In practice it seems you just have to tinker with the number of samples and see where the result begins to stabilize (think of it as a hyper-parameter that can be tuned).</p>
<p>To make this more concrete, let&apos;s try calculating the probability of a 6 or higher with varying numbers of samples.</p>
<pre><code class="language-python">prob_of_roll_greater_than_equal_to(6, n_samples=10)
prob_of_roll_greater_than_equal_to(6, n_samples=100)
prob_of_roll_greater_than_equal_to(6, n_samples=1000)
prob_of_roll_greater_than_equal_to(6, n_samples=10000)
prob_of_roll_greater_than_equal_to(6, n_samples=100000)
prob_of_roll_greater_than_equal_to(6, n_samples=1000000)
</code></pre>
<pre>
Probability of rolling greater than or equal to 6: 0.9 (10 samples)
Probability of rolling greater than or equal to 6: 0.68 (100 samples)
Probability of rolling greater than or equal to 6: 0.723 (1000 samples)
Probability of rolling greater than or equal to 6: 0.7217 (10000 samples)
Probability of rolling greater than or equal to 6: 0.72135 (100000 samples)
Probability of rolling greater than or equal to 6: 0.722335 (1000000 samples)
</pre>
<p>In this case 100 samples wasn&apos;t quite enough, but 1,000,000 was probably overkill. This is going to vary depending on the problem though.</p>
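<p>One crude way to automate that tinkering is to keep doubling the sample count until two consecutive estimates agree to within some tolerance (a heuristic, not a guarantee of convergence). A quick sketch, reusing <code>roll_dice()</code> from above:</p>
<pre><code class="language-python">def prob_geq(x, n_samples):
    return sum(1 for _ in range(n_samples) if roll_dice() &gt;= x) / float(n_samples)

def estimate_until_stable(estimator, tolerance=0.001, start=1000, max_samples=10 ** 7):
    # Keep doubling the sample count until two consecutive estimates differ by
    # less than the tolerance (or we exhaust the sample budget)
    n = start
    previous = estimator(n)
    while n &lt; max_samples:
        n *= 2
        current = estimator(n)
        if abs(current - previous) &lt; tolerance:
            return current, n
        previous = current
    return previous, n

print(estimate_until_stable(lambda n: prob_geq(6, n)))
</code></pre>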
<p>Let&apos;s move on to something slightly more complicated - calculating the value of &#x1D70B;. If you&apos;re not aware, &#x1D70B; is the ratio of a circle&apos;s circumference to its diameter. In other words, if you &quot;unrolled&quot; a circle with a diameter of one you would get a line with a length of &#x1D70B;. There are analytical ways to derive the value of &#x1D70B;, but what if we didn&apos;t know that? What if all we knew was the definition above? Monte Carlo to the rescue!</p>
<p>To understand the function below, imagine a quarter of a unit circle inscribed in a unit square. The area of a full unit circle is &#x1D70B;, so the quarter circle covers an area of &#x1D70B;/4 while the square covers an area of 1. If we generate a bunch of points uniformly at random in the square and record how many of them &quot;hit&quot; inside the quarter circle, the ratio of &quot;hits&quot; to total points should approach &#x1D70B;/4. We then multiply by 4 to get an approximation of &#x1D70B;. The same logic works with a full circle inscribed in a larger square, but we&apos;ll use the quarter-circle version below since <code>random.random()</code> only generates values between 0 and 1.</p>
<pre><code class="language-python">import math

def estimate_pi(samples):
    hits = 0
    for _ in range(samples):
        x = random.random()
        y = random.random()

        if math.sqrt((x ** 2) + (y ** 2)) &lt; 1:
            hits += 1

    ratio = (float(hits) / samples) * 4
    print(&apos;Estimate with {0} samples: {1}&apos;.format(samples, ratio))
</code></pre>
<p>Let&apos;s try it out with varying numbers of samples and see what happens.</p>
<pre><code class="language-python">estimate_pi(samples=10)
estimate_pi(samples=100)
estimate_pi(samples=1000)
estimate_pi(samples=10000)
estimate_pi(samples=100000)
estimate_pi(samples=1000000)
</code></pre>
<pre>
Estimate with 10 samples: 3.2
Estimate with 100 samples: 3.12
Estimate with 1000 samples: 3.172
Estimate with 10000 samples: 3.1352
Estimate with 100000 samples: 3.14964
Estimate with 1000000 samples: 3.14116
</pre>
<p>We should observe that as we increase the number of samples, the result is converging on the value of &#x1D70B;. If the logic I described above for how we&apos;re getting this result isn&apos;t clear, a picture might help.</p>
<pre><code class="language-python">def plot_pi_estimate(samples):
    hits = 0
    x_inside = []
    y_inside = []
    x_outside = []
    y_outside = []

    for _ in range(samples):
        x = random.random()
        y = random.random()

        if math.sqrt((x ** 2) + (y ** 2)) &lt; 1:
            hits += 1
            x_inside.append(x)
            y_inside.append(y)
        else:
            x_outside.append(x)
            y_outside.append(y)

    fig, ax = plt.subplots(figsize=(12, 9))
    ax.set_aspect(&apos;equal&apos;)
    ax.scatter(x_inside, y_inside, s=20, c=&apos;b&apos;)
    ax.scatter(x_outside, y_outside, s=20, c=&apos;r&apos;)
    fig.show()

    ratio = (float(hits) / samples) * 4
    print(&apos;Estimate with {0} samples: {1}&apos;.format(samples, ratio))
</code></pre>
<p>This function will plot randomly-generated numbers with a color indicating if the point falls inside (blue) or outside (red) the area of the unit circle. Let&apos;s try it with a moderate number of samples first and see what it looks like.</p>
<pre><code class="language-python">plot_pi_estimate(samples=10000)
</code></pre>
<pre>
Estimate with 10000 samples: 3.1244
</pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/04/mc-2.png" alt loading="lazy"></p>
<p>We can more or less see the contours of the circle forming. It should look much clearer if we raise the sample count a bit.</p>
<pre><code class="language-python">plot_pi_estimate(samples=100000)
</code></pre>
<pre>
Estimate with 100000 samples: 3.1368
</pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/04/mc-3.png" alt loading="lazy"></p>
<p>Better! It&apos;s worth taking a moment to consider what we&apos;re doing here. After all, approximating &#x1D70B; (at least to a few decimal points) is a fairly trivial problem. What&apos;s interesting about this technique though is we didn&apos;t need to know anything other than basic geometry to get there. This concept generalizes to much harder problems where no other method of calculating an answer is known to exist (or where doing so would be computationally intractable). If sacrificing precision is an acceptable trade-off, then using Monte Carlo techniques as a general problem-solving framework in domains involving randomness and uncertainty makes a lot of sense.</p>
<p>A related use of this technique involves combining Monte Carlo methods with Markov Chains, and is called (appropriately) Markov Chain Monte Carlo (usually abbreviated MCMC). A full explanation of MCMC is well outside of our scope, but I encourage the reader to check out <a href="http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter3_MCMC/Ch3_IntroMCMC_PyMC3.ipynb?ref=johnwittenauer.net">this notebook</a> for more information (side note: it&apos;s part of a whole series on Bayesian methods that is really good, and well worth your time). In the interest of not adding required reading to understand the next part, I&apos;ll try to briefly summarize the idea behind MCMC.</p>
<p>Like general Monte Carlo methods, MCMC is fundamentally about sampling from a distribution. But unlike before, MCMC is an approach to sampling an unknown distribution, given only some existing samples. MCMC involves using a Markov chain to &quot;search&quot; the space of possible distributions in a guided way. Rather than generating truly random samples, it uses the existing data as a starting point and then &quot;walks&quot; a Markov chain toward a state where the chain (hopefully) converges to the true posterior distribution (i.e. the same distribution that the original sample data came from).</p>
<p>In a sense, MCMC inverts what we saw above. In the dice example, we began with a <strong>distribution</strong> and drew samples to answer some question about that distribution. With MCMC, we <strong>begin</strong> with samples from some <strong>unknown</strong> distribution, and our objective is to approximate, as best we can, the distribution that those samples came from. This way of thinking about it helps to clarify in what situations we need general Monte Carlo methods vs. MCMC. If you already have the &quot;source&quot; distribution and need to answer some question about it, it&apos;s a Monte Carlo problem. However, if all you have is some data but you don&apos;t know the &quot;source&quot;, then MCMC can help you find it.</p>
<p>Let&apos;s see an example to make this more concrete. Imagine we have the result of a series of coin flips and we want to know if the coin being used is unbiased (that is, equally likely to land on heads or tails). How would you determine this from the data alone? Let&apos;s generate a sequence of coin flips from a coin that we know to be biased so we have some data as a starting point.</p>
<pre><code class="language-python">def biased_coin_flip():
    if random.random() &lt;= 0.6:
        return 1
    else:
        return 0

n_trials = 100
coin_flips = [biased_coin_flip() for _ in range(n_trials)]
n_heads = sum(coin_flips)
print(n_heads)
</code></pre>
<pre>
60
</pre>
<p>In this case since we&apos;re producing the data ourselves we know it is biased, but imagine we didn&apos;t know where this data came from. All we know is we have 100 coin flips and 60 are heads. Obviously 60 is greater than 50, and 50/100 is what we would guess if the coin was fair. On the other hand, it&apos;s definitely possible to get 60/100 heads with a fair coin just due to randomness. How do we move from a point estimate to a distribution of the likelihood that the coin is fair? That&apos;s where MCMC comes in.</p>
<pre><code class="language-python">import pymc3 as pm

with pm.Model() as coin_model:
    p = pm.Uniform(&apos;p&apos;, lower=0, upper=1)
    obs = pm.Bernoulli(&apos;obs&apos;, p, observed=coin_flips)
    step = pm.Metropolis()
    trace = pm.sample(100000, step=step)
    trace = trace[5000:]
</code></pre>
<p>Understanding this code requires some background in Bayesian statistics as well as PyMC3. Very simply, we define a prior distribution (<em>p</em>) along with an observed variable (<em>obs</em>) representing our known data. We then configure which algorithm we want to use (Metropolis-Hastings in this case) and initiate the chain. The result is a sequence of values that should, in aggregate, represent the most likely distribution that characterizes the original data.</p>
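<p>To demystify what&apos;s happening under the hood, here&apos;s a rough, hand-rolled sketch of a random-walk Metropolis sampler for the same coin problem (a simplified illustration, not the exact algorithm PyMC3 runs). With a uniform prior on <em>p</em>, the unnormalized log-posterior is just the Bernoulli log-likelihood, and the chain repeatedly proposes a small random step and accepts or rejects it based on how the posterior changes.</p>
<pre><code class="language-python">import math
import random

def log_posterior(p, heads, flips):
    # Uniform prior on (0, 1) plus the Bernoulli log-likelihood (up to a constant)
    if p &lt;= 0 or p &gt;= 1:
        return -float(&apos;inf&apos;)
    return heads * math.log(p) + (flips - heads) * math.log(1 - p)

def metropolis_coin(heads, flips, n_steps=100000, step_size=0.05):
    samples = []
    p_current = 0.5  # arbitrary starting point for the chain
    for _ in range(n_steps):
        # Propose a nearby value and accept it with probability min(1, posterior ratio)
        p_proposed = p_current + random.gauss(0, step_size)
        log_ratio = log_posterior(p_proposed, heads, flips) - log_posterior(p_current, heads, flips)
        if random.random() &lt; math.exp(min(0.0, log_ratio)):
            p_current = p_proposed
        samples.append(p_current)
    return samples

hand_rolled_trace = metropolis_coin(n_heads, n_trials)
</code></pre>
<p>Plotting a histogram of <code>hand_rolled_trace</code> (after discarding some burn-in at the start) should look broadly similar to the PyMC3 result below.</p>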
<p>To see what we ended up with, we can plot the values in a histogram.</p>
<pre><code class="language-python">fig, ax = plt.subplots(figsize=(12, 9))
plt.title(&apos;Posterior distribution of $p$&apos;)
plt.vlines(p_heads, 0, n_trials / 10, linestyle=&apos;--&apos;, label=&apos;true $p$ (unknown)&apos;)
plt.hist(trace[&apos;p&apos;], range=[0.3, 0.9], bins=25, histtype=&apos;stepfilled&apos;, normed=True)
plt.legend()
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2018/04/mc-4.png" alt loading="lazy"></p>
<p>From this result, we can see that the overwhelming likelihood is that the coin is biased (if it was fair then we would expect the &quot;bulk&quot; of the distribution to be around 0.5). To actually derive a concrete probability estimate though, we need to specify a range for which we would consider the result &quot;fair&quot; and integrate over the probability density function (basically the histogram above). For the sake of argument, let&apos;s say that anything between .45-.55 is fair. We can then compute the result using a simple count.</p>
<pre><code class="language-python">import numpy as np

n_fair = len(np.where((trace[&apos;p&apos;] &gt;= 0.45) &amp; (trace[&apos;p&apos;] &lt; 0.55))[0])
n_total = len(trace[&apos;p&apos;])

print(float(n_fair) / n_total)
</code></pre>
<pre>
0.16254736842105263
</pre>
<p>By our definition of &quot;fair&quot; above, there&apos;s roughly a 16% chance that the coin is unbiased.</p>
<p>Hopefully these examples provide a good illustration of the power and usefulness of Monte Carlo methods.  As I mentioned at the top, we&apos;re just scratching the surface of this topic (and I&apos;m still learning myself).  One of the more satisfying feelings for me intellectually is learning about some new idea or topic and then realizing that it relates and connects to other things I already know about in all sorts of interesting ways.  I think Monte Carlo methods fit this definition for me, and probably for most readers as well.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The Cryptocurrency Movement]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I wrote the following thoughts in response to some questions from the CIO of the company I work for about mining Bitcoin.  I decided to post my response (lightly edited) here because I think it summarizes my view on cryptocurrencies (and specifically Bitcoin) pretty well.</p>
<p>Currencies or stores of value</p>]]></description><link>https://www.johnwittenauer.net/the-cryptocurrency-movement/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad674c</guid><category><![CDATA[Curious Insights]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Sat, 24 Feb 2018 17:38:24 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I wrote the following thoughts in response to some questions from the CIO of the company I work for about mining Bitcoin.  I decided to post my response (lightly edited) here because I think it summarizes my view on cryptocurrencies (and specifically Bitcoin) pretty well.</p>
<p>Currencies or stores of value require trust &#x2013; trust that a unit of it will be recognized and accepted by others as a medium of exchange, trust that its supply is limited to prevent arbitrary devaluation, etc.  All known forms of currency before 2008 relied on either centralization (fiat currencies) or physical scarcity (gold, commodities) to establish trust.</p>
<p>Bitcoin, and cryptocurrencies more generally, attempt to do something that has never been possible before &#x2013; how do you create trust in a decentralized, digital system with no top-down control or ownership, in an environment where bits can be copied or manipulated at zero cost?</p>
<p><a href="https://medium.com/@cdixon/why-decentralization-matters-5e3f79f7638e?ref=johnwittenauer.net">Decentralization has a lot of benefits</a>, if you can pull it off.  Under the right conditions, it enables humans to organize and collaborate in fundamentally new ways.  In the long run, it may even disrupt political and social institutions by <a href="https://twitter.com/naval/status/877467629308395521?ref=johnwittenauer.net">replacing networks with markets</a>.  But to do that, one first needs to solve the trust problem.</p>
<p>Bitcoin&#x2019;s answer to this problem is &#x201C;proof of work&#x201D; &#x2013; an algorithm for creating <a href="https://keepingstock.net/explaining-blockchain-how-proof-of-work-enables-trustless-consensus-2abed27f0845?ref=johnwittenauer.net">distributed trustless consensus</a>.  It gets around the double-spend problem (inconsistent ledgers) while also incentivizing validation of transactions on the ledger by using cryptography to require increasingly harder mathematical &#x201C;puzzles&#x201D; be solved to confirm a transaction.</p>
<p>Why increasingly harder?  To prevent malicious actors from manipulating the ledger.  Imposing a scaling cost on adding to the ledger makes it intractable to &#x201C;re-write&#x201D; large portions of the ledger.  It would require compute power almost as large as that of the entire network.</p>
<p>In order to incentivize network participants to keep doing these &#x201C;puzzles&#x201D;, completing a puzzle rewards a small amount of Bitcoin.  The reward itself has no intrinsic value, but the market assigns it one based on collective belief in the network.  As the network expands, the &#x201C;conventional&#x201D; value of the reward increases, leading to more mining participation to keep up with the demands of the network in a self-reinforcing feedback loop.</p>
<p>However, there are problems with this model.  Proof of work comes with a societal cost via consumption of other scarce resources (electricity).  Since fiat money can buy compute power, and thus voting power in the network, it can lead to a de-facto centralization.  The blockchain community is well aware of these limitations and a lot of time and effort is being devoted to solving them.  Ethereum, for example, is planning to implement a different verification algorithm called <a href="https://hackernoon.com/what-is-proof-of-stake-8e0433018256?ref=johnwittenauer.net">proof of stake</a> that theoretically eliminates these downsides.  Bitcoin could follow suit eventually, or end up with an <a href="https://lightning.network/?ref=johnwittenauer.net">entirely different solution</a>.</p>
<p>Could Bitcoin get much bigger than it is today?  Yes, absolutely.  Bitcoin&#x2019;s market cap is proportional to the number of believers in the network.  And compared to traditional financial markets, it&#x2019;s tiny.  All Bitcoin combined is worth less than $200 billion.  By comparison, the worldwide value of gold is ~$8 trillion.  Equity markets are $100 trillion.  Currency markets are bigger still.  There are lots of good reasons why Bitcoin probably won&#x2019;t ever get that big, but it might.</p>
<p>Given its potential size, does it make sense to try your hand at mining?  Probably not.  From an economics standpoint, the market is highly efficient.  Participation has been commoditized thanks to easy access to specialized mining hardware.  No barrier to entry, hence no moat.  If the goal is simply understanding rather than financial gain, there&#x2019;s nothing one could learn from mining that couldn&#x2019;t be learned independently.  Everything there is to know about how this stuff works is freely available online.  The code is even open-source.</p>
<p>Many in the tech world view cryptonetworks today as analogous to the early stages of the internet.  The implication, of course, is that the technology will be every bit as impactful as the internet has been, but it may take a while to see it materialize.  They may well be right, but it&apos;s important to emphasize that it&apos;s still really early in the cycle.  The internet went through a funding nuclear winter before it took off, and the same could still happen to crypto.  The possibilities are exciting for sure, but personally I&apos;m trying to temper short-term hype or extrapolations of value and take a long-term view.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Markov Chains From Scratch]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Sometimes it pays to go back to basics. Data science is a massive, complicated field with seemingly endless topics to learn about. But in our rush to learn about the latest deep learning trends, it&apos;s easy to forget that there are simple yet powerful techniques right under our</p>]]></description><link>https://www.johnwittenauer.net/markov-chains-from-scratch/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad674b</guid><category><![CDATA[Data Science]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Tue, 16 Jan 2018 02:12:08 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Sometimes it pays to go back to basics. Data science is a massive, complicated field with seemingly endless topics to learn about. But in our rush to learn about the latest deep learning trends, it&apos;s easy to forget that there are simple yet powerful techniques right under our noses. In this post we&apos;ll explore one such technique called a Markov chain. By building one from scratch using nothing but standard Python libraries, we&apos;ll see how simplistic they can be while also yielding some cool results.</p>
<p>Markov chains are essentially a way to capture the probability of state transitions in a system. A process can be considered a Markov process if one can make predictions about the future state of the process based solely on its present state (or several of the most recent states for a higher-order Markov process). In other words, the history doesn&apos;t matter beyond a certain point. There are lots of great explainers out there so I&apos;ll leave that for the reader to explore independently (<a href="http://setosa.io/ev/markov-chains/?ref=johnwittenauer.net">this one</a> is my favorite). It will become clearer as we step through the code, so let&apos;s dive in.</p>
<p>For this example we&apos;re going to build a language-based Markov chain. More specifically, we&apos;ll read in a corpus of text and identify pairs of words that appear together. The pairings are sequential such that when a word \(w1\) is followed by a word \(w2\), then we say that the system has a probabilistic state transition from \(w1\) to \(w2\). An example will help. Consider the phrase &quot;the brown fox jumped over the lazy dog&quot;. If we break this down by word pairings, our state transitions would look like this:</p>
<p>the: [brown, lazy]<br>
brown: [fox]<br>
fox: [jumped]<br>
jumped: [over]<br>
over: [the]<br>
lazy: [dog]</p>
<p>This set of state transitions is called a Markov chain. With this in hand we can now choose a starting point (i.e. a word in the corpus) and &quot;walk the chain&quot; to create a new phrase. Markov chains built in this manner over large amounts of text can produce surprisingly realistic-sounding phrases.</p>
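<p>As a tiny illustration before we scale up to a real corpus, here&apos;s what walking that hand-built chain might look like (the chain below is hard-coded from the example phrase purely for demonstration):</p>
<pre><code class="language-python">import random

# Hand-built chain for the example phrase above, just for illustration
toy_chain = {&apos;the&apos;: [&apos;brown&apos;, &apos;lazy&apos;], &apos;brown&apos;: [&apos;fox&apos;], &apos;fox&apos;: [&apos;jumped&apos;],
             &apos;jumped&apos;: [&apos;over&apos;], &apos;over&apos;: [&apos;the&apos;], &apos;lazy&apos;: [&apos;dog&apos;]}

word = &apos;the&apos;
phrase = [word]
for _ in range(20):  # cap the walk so it always terminates
    if word not in toy_chain:
        break
    word = random.choice(toy_chain[word])
    phrase.append(word)

print(&apos; &apos;.join(phrase))
</code></pre>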
<p>In order to get started we need a corpus of text. Anything sufficiently large will do, but to really have some fun (and at the risk of bringing politics into the mix) we&apos;re going to make Markov chains great again by using <a href="https://github.com/ryanmcdermott/trump-speeches?ref=johnwittenauer.net">this collection of text from Donald Trump&apos;s campaign speeches</a>. Our first step is to import the text file and parse it into words.</p>
<pre><code class="language-python">import urllib2
text = urllib2.urlopen(&apos;https://raw.githubusercontent.com/ryanmcdermott/trump-speeches/master/speeches.txt&apos;)
words = []
for line in text:
    line = line.decode(&apos;utf-8-sig&apos;, errors=&apos;ignore&apos;)
    line = line.encode(&apos;ascii&apos;, errors=&apos;ignore&apos;)
    line = line.replace(&apos;\r&apos;, &apos; &apos;).replace(&apos;\n&apos;, &apos; &apos;)
    new_words = line.split(&apos; &apos;)
    new_words = [word for word in new_words if word not in [&apos;&apos;, &apos; &apos;]]
    words = words + new_words

print(&apos;Corpus size: {0} words.&apos;.format(len(words)))
</code></pre>
<pre>
Corpus size: 166259 words.
</pre>
<p>I did some clean-up by converting it to ASCII and removing line breaks, but that&apos;s about it; the rest of the text is left just as it appears in the source file. Our next step is to build the transition probabilities. We&apos;ll represent our transitions as a dictionary where the keys are the distinct words in the corpus and the value for a given key is a list of words that appear after that key. To build the chain we just need to iterate through the list of words, add each one to the dictionary if it&apos;s not already there, and add the word that follows it to the list of transition words.</p>
<pre><code class="language-python">chain = {}
n_words = len(words)
for i, key in enumerate(words):
    if n_words &gt; (i + 1):
        word = words[i + 1]
        if key not in chain:
            chain[key] = [word]
        else:
            chain[key].append(word)

print(&apos;Chain size: {0} distinct words.&apos;.format(len(chain)))
</code></pre>
<pre>
Chain size: 13292 distinct words.
</pre>
<p>It may come as a surprise that we&apos;re just naively inserting words into the transition list without caring if that word had appeared already or not. Won&apos;t we get duplicates, and isn&apos;t that a problem? Yes we will, and no it&apos;s not. Think of this as a simplistic way of representing the transition probability. If a word appears multiple times in the list, and we sample from the list randomly during a transition, there&apos;s a higher likelihood that we pick that word proportional to the number of times it appeared after the key relative to all the other words in the corpus that appeared after that key.</p>
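<p>If the duplicate-based representation feels odd, a quick sanity check makes it clearer - counting and normalizing the successors of any key recovers an explicit probability distribution. A small sketch, assuming the <code>chain</code> built above is in memory (here <code>&apos;great&apos;</code> is just an example key that happens to appear in this corpus):</p>
<pre><code class="language-python">from collections import Counter

def transition_probabilities(chain, key):
    # Count how often each word follows the key, then normalize the counts
    counts = Counter(chain[key])
    total = float(sum(counts.values()))
    return {word: count / total for word, count in counts.items()}

print(transition_probabilities(chain, &apos;great&apos;))
</code></pre>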
<p>Now that we&apos;ve built our Markov chain, we can get to the fun part - using it to generate phrases! To do this we only need two pieces of information - a starting word, and a phrase length. We&apos;re going to randomly select a starting word from the corpus and make our phrases tweet-length by sampling until our phrase hits 140 characters (assume we&apos;re part of the #never280 crowd). Let&apos;s give it a try.</p>
<pre><code class="language-python">import random
w1 = random.choice(words)
tweet = w1

while len(tweet) &lt; 140:
    w2 = random.choice(chain[w1])
    tweet += &apos; &apos; + w2
    w1 = w2

print(tweet)
</code></pre>
<pre>
Were not going to run by the 93 million people are, where were starting. New Hampshire.&quot; I PROMISE. I do so incredible, and be insulted, Chuck.
</pre>
<p>Not bad! The limitations of using only one word for context are readily apparent though. We can improve it by using a 2nd-order Markov chain instead. This time, instead of using simple word pairings, our &quot;keys&quot; will be the set of distinct tuples of words that appear in the text. Borrowing from the example phrase earlier, a 2nd-order Markov chain for &quot;the brown fox jumped over the lazy dog&quot; would look like:</p>
<p>(the, brown): [fox]<br>
(brown, fox): [jumped]<br>
(fox, jumped): [over]<br>
(jumped, over): [the]<br>
(over, the): [lazy]<br>
(the, lazy): [dog]</p>
<p>In order to build a 2nd-order chain, we have to make a few modifications to the code.</p>
<pre><code class="language-python">chain = {}
n_words = len(words)
for i, key1 in enumerate(words):
    if n_words &gt; i + 2:
        key2 = words[i + 1]
        word = words[i + 2]
        if (key1, key2) not in chain:
            chain[(key1, key2)] = [word]
        else:
            chain[(key1, key2)].append(word)

print(&apos;Chain size: {0} distinct word pairs.&apos;.format(len(chain)))
</code></pre>
<pre>
Chain size: 72373 distinct word pairs.
</pre>
<p>We can do a sanity check to make sure it&apos;s doing what we expect by choosing a word pair that appears somewhere in the text and then examining the transitions in the chain for that pair of words.</p>
<pre><code class="language-python">chain[(&quot;Its&quot;, &quot;so&quot;)]
</code></pre>
<pre>
[&apos;great&apos;,
 &apos;great&apos;,
 &apos;easy.&apos;,
 &apos;preposterous.&apos;,
 &apos;important...&apos;,
 &apos;simple.&apos;,
 &apos;simple.&apos;,
 &apos;horrible.&apos;,
 &apos;out&apos;,
 &apos;terrible.&apos;,
 &apos;sad.&apos;,
 &apos;much&apos;,
 &apos;can&apos;,
 &apos;easy.&apos;,
 &apos;embarrassing&apos;,
 &apos;astronomical&apos;]
</pre>
<p>Looks about like what I&apos;d expect. Next we need to modify the &quot;tweet&quot; code to handle the new design.</p>
<pre><code class="language-python">r = random.randint(0, len(words) - 1)
key = (words[r], words[r + 1])
tweet = key[0] + &apos; &apos; + key[1]

while len(tweet) &lt; 140:
    w = random.choice(chain[key])
    tweet += &apos; &apos; + w
    key = (key[1], w)

print(tweet)
</code></pre>
<pre>
there. They saw it. He talks about medical cards. He talks about fixing the VA health care. They want to talk to me from Georgia? &quot;Dear So and
</pre>
<p>Better! Let&apos;s turn this into a function that we can call repeatedly to see a few more examples.</p>
<pre><code class="language-python">def markov_tweet(chain, words):
    r = random.randint(0, len(words) - 3)  # stay clear of the corpus end so the word pair exists as a chain key
    key = (words[r], words[r + 1])
    tweet = key[0] + &apos; &apos; + key[1]

    while len(tweet) &lt; 140:
        w = random.choice(chain[key])
        tweet += &apos; &apos; + w
        key = (key[1], w)

    print(tweet + &apos;\n&apos;)
</code></pre>
<pre><code class="language-python">markov_tweet(chain, words)
markov_tweet(chain, words)
markov_tweet(chain, words)
markov_tweet(chain, words)
markov_tweet(chain, words)
</code></pre>
<pre>
East. But we have a huge subject. Ive been with the Romney campaign. Guys made tens of thousands of people didnt care about the vets in one hour.

somebody is going to put American-produced steel back into the sky. It will be the candidate. But I think 11 is a huge problem. And Im on the

THAT WE CAN ONLY DREAM ABOUT. THEY HAVE A VERY BIG BEAUTIFUL GATE IN THAT WALL, BIG AND BEAUTIFUL, RIGHT. NO. NO, I DON&apos;T KNOW WHERE THEY HAVE

We need to get so sick of me. I didnt want the world my tenant. They buy condos for tens of millions of dollars overseas. And too many executive

Wont be as good as you know, started going around and were going to win. Were going to happen. Thank you. SPEECH 8 This is serious rifle. This
</pre>
<p>That&apos;s all there is to it! Incredibly simple yet surprisingly effective. It&apos;s obviously not perfect but it&apos;s not complete gibberish either. If you run it enough times you&apos;ll find some combinations that actually sound pretty plausible. These results could probably be improved significantly with a much more powerful technique like a recurrent neural net, but relative to the effort involved it&apos;s hard to beat Markov chains.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Sapiens]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This post is about Yuval Noah Harari&apos;s book <a href="https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095?ref=johnwittenauer.net">Sapiens</a>.  All of my favorite books have changed the way I view the world in some non-trivial way.  There&apos;s nothing quite like the experience of reading a book and then, afterward, realizing that you can&apos;t go</p>]]></description><link>https://www.johnwittenauer.net/sapiens/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad674a</guid><category><![CDATA[Book Review]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Fri, 03 Nov 2017 01:43:08 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This post is about Yuval Noah Harari&apos;s book <a href="https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095?ref=johnwittenauer.net">Sapiens</a>.  All of my favorite books have changed the way I view the world in some non-trivial way.  There&apos;s nothing quite like the experience of reading a book and then, afterward, realizing that you can&apos;t go back to who you were before you read it because you just see things differently.  <em>Sapiens</em> falls into that category for me.  At first glance, one would expect that this is a book about the inexorable rise of humankind - where we came from, how we evolved to what we are today, and what we were like along the way.  It is that, at least in a sense, but it&apos;s also so much more.  <em>Sapiens</em> goes beyond a simple stating of the facts as we know them and uncovers some deeper (and often inconvenient) truths about humankind.  Harari&apos;s analysis of WHY history unfolded the way it did is more fascinating than the history itself, and paints a much more holistic picture of what it means to be human.</p>
<p>One of the most interesting themes throughout the book is the role that imagination has played in our species&apos; ascendancy on the world stage.  I think most people probably believe that humans have always been at the top of the food chain, but for most of our history we were essentially foragers (or as Harari put it, &quot;an animal of no significance&quot;).  It wasn&apos;t until the &quot;cognitive revolution&quot; around 70,000 years ago that humans moved to the top of the food chain.  The reason seems to be that our new cognitive skills allowed us to coordinate with other humans on a level that the world had not seen before.  What specifically enabled this were two things - new language skills, and the ability to invent fiction.  Our newfound ability to imagine things that aren&apos;t real allowed humans to organize around shared beliefs in myths that we, ourselves, created.</p>
<p>If you think about it, transitioning from a world in which all life operates via basic biological principles to one in which a species can imagine alternate realities is a big deal.  This change arguably led to everything that has come since, to all of the accumulated knowledge that our civilization has amassed in the many generations that followed.  Before this transition, an organism&apos;s operating system was programmed in its DNA.  Some organisms could acquire limited amounts of knowledge from their environment, such as the best places to hunt or tricks to avoid being eaten, but everything else was hard-wired.  With the ability to imagine, humans could rewrite their own operating system.  We could create an idea - a conception of a thing that didn&apos;t exist, and make it reality.</p>
<p>What I find particularly fascinating is Harari&apos;s argument that basically every organizing principle in modern human society falls under the category of a shared belief in fiction.  Religions, laws, nations, companies, human rights - these are all, in a sense, figments of our collective imaginations.  Biology doesn&apos;t take a stance on how humans should treat each other.  There is no physical necessity for the borders we draw around nations, no tangible thread holding together the assets of a corporation.  All of this seems completely obvious upon introspection, but I discovered that until reading this book I had never really thought about it.  I&apos;ve found that the way I view things like political debate has changed as a result.</p>
<p><em>Sapiens</em> delves into numerous topics that one might be surprised to find in a &quot;history&quot; book.  For instance, there&apos;s a chapter that questions whether or not we&apos;re any happier or more fulfilled as a species for all the progress we&apos;ve made.  In another chapter, Harari looks at the trends of the past and begins to speculate about where we&apos;re headed next.  He discusses the role that technologies like artificial intelligence and genetic engineering might play in this future, and whether or not we&apos;ll even still call ourselves <em>Homo sapiens</em>.</p>
<p>Philosophy aside, <em>Sapiens</em> does a great job of covering the major themes in our species&apos; history.  The agricultural revolution, the invention of written language and money, imperialism, the scientific revolution, the rise of capitalism and industry.  The book covers a lot of ground.  Harari&apos;s analysis is often counter-intuitive (and probably controversial), but always thoroughly researched and very well-reasoned.  For example, he called the agricultural revolution &quot;history&apos;s biggest fraud&quot; because for most humans, the transition led to a much harsher and less rewarding life.  He documents the &quot;imagined order&quot; of large societies that placed people into hierarchies, and concludes that the particular structures of these hierarchies are mostly accidents of history.  He argues that scientific research flourished only in alliance with ideologies such as capitalism or imperialism, because these ideologies funded the cost of research.  These conclusions are not immediately obvious but he makes a compelling case.</p>
<p>There was so much packed into this book that I&apos;m really just scratching the surface.   It&apos;s not a quick read but the writing style is very engaging so it doesn&apos;t feel laborious like you might expect for a book with this subject matter.  I would highly recommend it to anyone interested in expanding their knowledge about humankind&apos;s social and cultural evolution.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Time Series Forecasting With Prophet]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://facebookincubator.github.io/prophet/?ref=johnwittenauer.net">Prophet</a> is an open source forecasting tool built by Facebook. It can be used for time series modeling and forecasting trends into the future. Prophet is interesting because it&apos;s both sophisticated and quite easy to use, so it&apos;s possible to generate very good forecasts with relatively</p>]]></description><link>https://www.johnwittenauer.net/time-series-forecasting-with-prophet/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad6749</guid><category><![CDATA[Data Science]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Data Visualization]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Sat, 02 Sep 2017 01:51:51 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://facebookincubator.github.io/prophet/?ref=johnwittenauer.net">Prophet</a> is an open source forecasting tool built by Facebook. It can be used for time series modeling and forecasting trends into the future. Prophet is interesting because it&apos;s both sophisticated and quite easy to use, so it&apos;s possible to generate very good forecasts with relatively little effort or domain knowledge in time series analysis.</p>
<p>There are a few requirements you&apos;ll need to meet in order to use the library. It uses PyStan to do all of its inference, so PyStan has to be installed. PyStan has its own dependencies, including a C++ compiler. Python 3 also appears to be a requirement. Full installation instructions are <a href="https://facebookincubator.github.io/prophet/docs/installation.html?ref=johnwittenauer.net">here</a>.</p>
<p>Let&apos;s take a quick tour through Prophet&apos;s capabilities. We can start by reading in some sample time series data. In this case we&apos;re using Wikipedia page hits for Peyton Manning, which is the data set that Facebook collected for the library&apos;s example code.</p>
<pre><code class="language-python">%matplotlib inline
import os
import pandas as pd
import numpy as np
from fbprophet import Prophet

path = os.path.dirname(os.path.dirname(os.getcwd())) + &apos;/data/manning.csv&apos;
data = pd.read_csv(path)
data[&apos;ds&apos;] = pd.to_datetime(data[&apos;ds&apos;])
data.head()
</code></pre>
<table>
  <thead>
    <tr>
      <th></th>
      <th>ds</th>
      <th>y</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>0</th>
      <td>2007-12-10</td>
      <td>14629</td>
    </tr>
    <tr>
      <th>1</th>
      <td>2007-12-11</td>
      <td>5012</td>
    </tr>
    <tr>
      <th>2</th>
      <td>2007-12-12</td>
      <td>3582</td>
    </tr>
    <tr>
      <th>3</th>
      <td>2007-12-13</td>
      <td>3205</td>
    </tr>
    <tr>
      <th>4</th>
      <td>2007-12-14</td>
      <td>2680</td>
    </tr>
  </tbody>
</table>
<p>There are only two columns in the data, a date and a value. The naming convention of using &apos;ds&apos; for the date and &apos;y&apos; for the value is apparently a requirement to use Prophet; it&apos;s expecting those exact names and will not work otherwise!</p>
<p>Let&apos;s examine the data by plotting it using pandas&apos; built-in plotting function.</p>
<pre><code class="language-python">data.set_index(&apos;ds&apos;).plot(figsize=(12, 9))
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2017/08/download1.png" alt loading="lazy"></p>
<p>The data is highly volatile with order-of-magnitude differences between a typical day and a high-traffic day. This will be hard to model directly. Let&apos;s try applying a log transform to see if that helps.</p>
<pre><code class="language-python">data[&apos;y&apos;] = np.log(data[&apos;y&apos;])
data.set_index(&apos;ds&apos;).plot(figsize=(12, 9))
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2017/08/download2.png" alt loading="lazy"></p>
<p>Much better! Not only is the variance now much more stable, but we&apos;ve also revealed what looks like some cyclical patterns in the data. We can now instantiate a Prophet model and fit it to our data.</p>
<pre><code class="language-python">m = Prophet()
m.fit(data)
</code></pre>
<p>That was easy! This is one of the most attractive features of Prophet. It essentially does all of the model selection work for you and gives you a result that works well without much user input required. In this case we didn&apos;t have to specify anything at all, just give it some data and we get a model.</p>
<p>We&apos;ll explore below what the model looks like but it&apos;s worth spending a moment first to explain what&apos;s going on here. Unlike typical time-series methods like ARIMA (which are considered generative models), Prophet uses something called an additive regression model. This is essentially a sophisticated curve-fitting model. I haven&apos;t dug into any of the math, but based on the description in their <a href="https://research.fb.com/prophet-forecasting-at-scale/?ref=johnwittenauer.net">introductory blog post</a>, Prophet builds separate components for the trend, yearly seasonality, and weekly seasonality in the time series (with holidays as an optional fourth component). We can witness this directly by looking at one of the undocumented properties on the model object that shows the fitted parameters.</p>
<pre><code class="language-python">m.params
</code></pre>
<pre>
{u&apos;beta&apos;: array([[ 0.        , -0.03001147,  0.04819977,  0.00999481, -0.00228437,
          0.01252909,  0.01559136,  0.00950633,  0.00075704,  0.00391209,
         -0.00586589,  0.0075454 , -0.00524287,  0.00208091, -0.00477578,
         -0.00410379, -0.0077744 , -0.00081338,  0.00125811,  0.00187115,
          0.0069828 , -0.01233829, -0.01057246,  0.00938595,  0.00847051,
          0.00088024, -0.00352237]]),
 u&apos;delta&apos;: array([[  1.62507395e-07,   1.29092081e-08,   3.48169254e-01,
           4.57815903e-01,   1.61826714e-07,  -5.66144938e-04,
          -2.34969389e-01,  -2.46905754e-01,   9.96595883e-08,
          -1.82605683e-07,   6.12381739e-08,   2.78653912e-01,
           2.30631082e-01,   2.83118248e-03,   1.55276178e-03,
          -8.61134360e-01,  -3.14239669e-07,   5.54456073e-09,
           4.91423429e-07,   4.71475093e-01,   7.93935609e-03,
           1.36547372e-07,  -3.38274613e-01,  -3.20008088e-07,
           1.16410210e-07]]),
 u&apos;gamma&apos;: array([[ -5.37486490e-09,  -8.40863029e-10,  -3.59567303e-02,
          -6.19588853e-02,  -2.69802216e-08,   1.12158987e-04,
           5.44799089e-02,   6.53304459e-02,  -2.95648930e-08,
           6.03344459e-08,  -2.21556944e-08,  -1.09561865e-01,
          -9.78411305e-02,  -1.28994139e-03,  -7.57253043e-04,
           4.47568989e-01,   1.73293155e-07,  -3.23167613e-09,
          -3.01853068e-07,  -3.04398195e-01,  -5.37507537e-03,
          -9.67767399e-08,   2.50366597e-01,   2.46999155e-07,
          -9.35053320e-08]]),
 u&apos;k&apos;: array([[-0.35578215]]),
 u&apos;m&apos;: array([[ 0.62604285]]),
 u&apos;sigma_obs&apos;: array([[ 0.03759107]])}
</pre>
<p>I think the beta, delta, and gamma arrays correspond to the distributions for the three different components. The way I think about this is we&apos;re saying we have three different regression models with some unknown set of parameters, and we want to find the combination of those models that best explains the data. We can attempt to do this using maximum a posteriori (MAP) estimation, where our priors are the equations for the regression components (piecewise linear for the trend, Fourier series for the seasonal component, and so on). This appears to be what Prophet is doing. I can&apos;t say I&apos;ve looked at it in any great detail so part of that explanation could be wrong, but I think it&apos;s broadly correct.</p>
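<p>For reference, the additive decomposition that Prophet fits is roughly of the form \(y(t) = g(t) + s(t) + h(t) + \epsilon_t\), where \(g(t)\) is the trend, \(s(t)\) the periodic seasonal terms, \(h(t)\) the optional holiday effects, and \(\epsilon_t\) the error term.</p>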
<p>Now that we have a model, let&apos;s see what we can do with it. The obvious place to start is to forecast what we think our value will be for some future dates. Prophet makes this easy with a helper function.</p>
<pre><code class="language-python">future_data = m.make_future_dataframe(periods=365)
future_data.tail()
</code></pre>
<table>
  <thead>
    <tr>
      <th></th>
      <th>ds</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>3265</th>
      <td>2017-01-15</td>
    </tr>
    <tr>
      <th>3266</th>
      <td>2017-01-16</td>
    </tr>
    <tr>
      <th>3267</th>
      <td>2017-01-17</td>
    </tr>
    <tr>
      <th>3268</th>
      <td>2017-01-18</td>
    </tr>
    <tr>
      <th>3269</th>
      <td>2017-01-19</td>
    </tr>
  </tbody>
</table>
<p>That gives us a data frame with dates going one year forward from where our data ends. We can then use the &quot;predict&quot; function to populate this data frame with forecast information.</p>
<pre><code class="language-python">forecast = m.predict(future_data)
forecast.columns
</code></pre>
<pre>
Index([u&apos;ds&apos;, u&apos;t&apos;, u&apos;trend&apos;, u&apos;seasonal_lower&apos;, u&apos;seasonal_upper&apos;,
       u&apos;trend_lower&apos;, u&apos;trend_upper&apos;, u&apos;yhat_lower&apos;, u&apos;yhat_upper&apos;, u&apos;weekly&apos;,
       u&apos;weekly_lower&apos;, u&apos;weekly_upper&apos;, u&apos;yearly&apos;, u&apos;yearly_lower&apos;,
       u&apos;yearly_upper&apos;, u&apos;seasonal&apos;, u&apos;yhat&apos;],
      dtype=&apos;object&apos;)
</pre>
<p>The point estimate forecasts are in the &quot;yhat&quot; column, but note how many columns got added. In addition to the forecast itself we also have point estimates for each of the components, as well as upper and lower bounds for each of these projections. That&apos;s a lot of detail provided out-of-the-box just by calling a single function!</p>
<p>Let&apos;s see an example.</p>
<pre><code class="language-python">forecast[[&apos;ds&apos;, &apos;yhat&apos;, &apos;yhat_lower&apos;, &apos;yhat_upper&apos;]].tail()
</code></pre>
<table>
  <thead>
    <tr>
      <th></th>
      <th>ds</th>
      <th>yhat</th>
      <th>yhat_lower</th>
      <th>yhat_upper</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>3265</th>
      <td>2017-01-15</td>
      <td>8.200620</td>
      <td>7.493151</td>
      <td>8.886727</td>
    </tr>
    <tr>
      <th>3266</th>
      <td>2017-01-16</td>
      <td>8.525638</td>
      <td>7.791967</td>
      <td>9.266697</td>
    </tr>
    <tr>
      <th>3267</th>
      <td>2017-01-17</td>
      <td>8.313019</td>
      <td>7.620597</td>
      <td>9.000529</td>
    </tr>
    <tr>
      <th>3268</th>
      <td>2017-01-18</td>
      <td>8.145577</td>
      <td>7.449701</td>
      <td>8.870133</td>
    </tr>
    <tr>
      <th>3269</th>
      <td>2017-01-19</td>
      <td>8.157476</td>
      <td>7.467178</td>
      <td>8.860933</td>
    </tr>
  </tbody>
</table>
<p>Prophet also supplies several useful plotting functions. The first one is just called &quot;plot&quot;, which displays the actual values along with the estimates. For the forecast period it only displays the projections since we don&apos;t have actual values for this period.</p>
<pre><code class="language-python">m.plot(forecast);
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2017/08/download3.png" alt loading="lazy"></p>
<p>I found this to be a bit confusing because the data frame we passed in only contained the &quot;forecast&quot; date range, so where did the rest of it come from? I think the model object is storing the data it was trained on and using it as part of this function, so it looks like it will plot the whole date range regardless.</p>
<p>We can use another built-in plot to show each of the individual components. This is quite useful to visually inspect what the model is capturing from the data. In this case there are a few clear takeaways such as higher activity during football season or increased activity on Sunday &amp; Monday.</p>
<pre><code class="language-python">m.plot_components(forecast);
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2017/08/download4.png" alt loading="lazy"></p>
<p>In addition to the above components, Prophet can also incorporate possible effects from holidays. Holidays and dates for each holiday have to be manually specified over the entire range of the data set (including the forecast period). The way holidays get defined and incorporated into the model is fairly simple. Below are some holiday definitions for our current data set that include Peyton Manning&apos;s playoff and Superbowl appearances (taken from the example code).</p>
<pre><code class="language-python">playoffs = pd.DataFrame({
  &apos;holiday&apos;: &apos;playoff&apos;,
  &apos;ds&apos;: pd.to_datetime([&apos;2008-01-13&apos;, &apos;2009-01-03&apos;, &apos;2010-01-16&apos;,
                        &apos;2010-01-24&apos;, &apos;2010-02-07&apos;, &apos;2011-01-08&apos;,
                        &apos;2013-01-12&apos;, &apos;2014-01-12&apos;, &apos;2014-01-19&apos;,
                        &apos;2014-02-02&apos;, &apos;2015-01-11&apos;, &apos;2016-01-17&apos;,
                        &apos;2016-01-24&apos;, &apos;2016-02-07&apos;]),
  &apos;lower_window&apos;: 0,
  &apos;upper_window&apos;: 1,
})

superbowls = pd.DataFrame({
  &apos;holiday&apos;: &apos;superbowl&apos;,
  &apos;ds&apos;: pd.to_datetime([&apos;2010-02-07&apos;, &apos;2014-02-02&apos;, &apos;2016-02-07&apos;]),
  &apos;lower_window&apos;: 0,
  &apos;upper_window&apos;: 1,
})

holidays = pd.concat((playoffs, superbowls))
</code></pre>
<p>Once we have holidays defined in a data frame, using them in the model is just a matter of passing in the data frame as a parameter when we define the model.</p>
<pre><code class="language-python">m = Prophet(holidays=holidays)
forecast = m.fit(data).predict(future_data)
m.plot_components(forecast);
</code></pre>
<p><img src="https://www.johnwittenauer.net/content/images/2017/08/download5.png" alt loading="lazy"></p>
<p>Our component plot now includes a holidays component with spikes indicating the magnitude of influence those holidays have on the value.</p>
<p>While the Prophet library itself is very powerful, there are some useful features that we&apos;d typically want when doing time series modeling that it currently doesn&apos;t provide. One very simple and obvious thing that&apos;s needed is a way to evaluate the forecasts. We can do this ourselves using scikit-learn&apos;s metrics (you could also calculate it yourself). Note that since we took the natural log of the series earlier we need to reverse that to get a meaningful number.</p>
<pre><code class="language-python">from sklearn.metrics import mean_absolute_error
data = m.predict(data)
mean_absolute_error(np.exp(data[&apos;y&apos;]), np.exp(data[&apos;yhat&apos;]))
</code></pre>
<pre>
2436.9620410194648
</pre>
<p>That works fine as a very simple example, but for real applications we&apos;d probably want something more robust like cross-validation over sliding windows of the data set. Currently in order to accomplish this we&apos;d have to implement it ourselves.</p>
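<p>As a rough sketch of what a rolling-origin evaluation might look like (assuming <code>data</code> here still holds the original <code>ds</code>/<code>y</code> columns, and using an expanding training window with a fixed holdout horizon):</p>
<pre><code class="language-python">from sklearn.metrics import mean_absolute_error

def rolling_origin_mae(df, n_folds=3, horizon=365):
    # Train on an expanding window and score the next `horizon` days, n_folds times
    scores = []
    for fold in range(n_folds, 0, -1):
        cutoff = len(df) - fold * horizon
        train, test = df[:cutoff], df[cutoff:cutoff + horizon]
        model = Prophet()
        model.fit(train)
        pred = model.predict(test[[&apos;ds&apos;]])
        scores.append(mean_absolute_error(np.exp(test[&apos;y&apos;].values), np.exp(pred[&apos;yhat&apos;].values)))
    return scores

print(rolling_origin_mae(data))
</code></pre>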
<p>Another limitation is the lack of ability to incorporate additional information into the model. One can imagine variables that could be used along with the time series to further improve the forecast (for example, a variable indicating if Peyton Manning had just won a game, or had a particularly good performance, or appeared in some news articles). We can&apos;t do anything like this with Prophet directly. However, one idea I&apos;ve experimented with in the past that may get around this limitation is building a two-stage model. The first stage is the Prophet model, and we use that to generate predictions. The second stage is a normal regression model that includes the additional signals as independent variables. The wrinkle is that instead of predicting the target directly, we predict the <strong>error</strong> from the time series model. When you put the two together, this may result in an even better overall forecast.</p>
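<p>Here&apos;s a loose sketch of that two-stage idea. The <code>train_df</code> frame and the extra signal columns (<code>won_last_game</code>, <code>news_mentions</code>) are hypothetical - the point is just the shape of the approach: fit Prophet, compute its residuals, and fit an ordinary regression on the extra signals to predict those residuals.</p>
<pre><code class="language-python">from sklearn.linear_model import LinearRegression

# Stage 1: fit Prophet on the ds/y columns and compute its in-sample error
stage1 = Prophet()
stage1.fit(train_df[[&apos;ds&apos;, &apos;y&apos;]])
stage1_pred = stage1.predict(train_df[[&apos;ds&apos;]])
residuals = train_df[&apos;y&apos;].values - stage1_pred[&apos;yhat&apos;].values

# Stage 2: fit a plain regression on the extra signals to predict that error
feature_cols = [&apos;won_last_game&apos;, &apos;news_mentions&apos;]  # hypothetical extra signals
stage2 = LinearRegression().fit(train_df[feature_cols], residuals)

# Final forecast = Prophet&apos;s forecast plus the predicted correction
correction = stage2.predict(train_df[feature_cols])
combined = stage1_pred[&apos;yhat&apos;].values + correction
</code></pre>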
<p>All things considered, Prophet is a great addition to the toolbox for time series problems. There are a number of knobs and dials that one can tweak that I didn&apos;t get into because I still haven&apos;t tried them out, but they provide options for advanced users to improve their forecasts even further. It&apos;s worth cautioning that this software is fairly immature so proceed carefully if using it for any serious tasks. That said, the authors claim Facebook uses it extensively so take that for what it&apos;s worth.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Zero To One]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This post is about Peter Thiel&apos;s book <a href="https://www.amazon.com/Zero-One-Notes-Startups-Future/dp/0804139296?ref=johnwittenauer.net">Zero to One</a>.  I&apos;m generally fascinated by how very smart people view the world, particularly if those views are unpopular.  Peter Thiel, a well-known entrepreneur/investor and famously contrarian thinker, definitely fits into this category.  In this book, Peter</p>]]></description><link>https://www.johnwittenauer.net/zero-to-one/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad6748</guid><category><![CDATA[Book Review]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Mon, 26 Jun 2017 00:15:25 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This post is about Peter Thiel&apos;s book <a href="https://www.amazon.com/Zero-One-Notes-Startups-Future/dp/0804139296?ref=johnwittenauer.net">Zero to One</a>.  I&apos;m generally fascinated by how very smart people view the world, particularly if those views are unpopular.  Peter Thiel, a well-known entrepreneur/investor and famously contrarian thinker, definitely fits into this category.  In this book, Peter lays out his perspective on building the future.  The central thesis is the idea that progress doesn&apos;t happen on its own - someone has to make it happen.  This is where the &quot;zero to one&quot; phrase from the title comes in.  It refers to the fact that most things are simply copying or iterating on something that&apos;s already been done (1 to <em>n</em>).  It takes genius and courage to invent transformative new technology that creates abundance and prosperity (0 to 1).</p>
<p>In Peter&apos;s view this technology is most likely to come from startups (as he put it, a startup is the largest group of people you can convince of a plan to build a different future).  With that perspective in mind, much of the book focuses on lessons about how to &quot;build the future&quot; through a startup.  But I think the main points of the book are valuable whether you have any interest in startups or not.  They offer new ways of thinking about technology, markets, and competition.  It&apos;s bold, contrarian, and thought-provoking.  Here are some of the key insights.</p>
<h4 id="competitionmonopoly">Competition &amp; Monopoly</h4>
<p>One of the most notable threads that gets touched on often is the nature of competition.  Most economists view perfectly competitive markets as capitalism working as intended.  Competition causes each participant to raise their game, lower prices, deliver greater value etc. in the pursuit of customers and profit.  But Peter argues that competition and capitalism are actually opposites.  Competitive markets erode profits until no one is making any money.  When margins are very thin and profit is hard to come by, companies can&apos;t afford to do anything except fight for market share.  They&apos;re trapped by short-term thinking.</p>
<p>The opposite state is when a company has a monopoly - complete dominance of a market protected by a huge moat.  Most of us would say that monopolies are bad.  Monopolies allow companies to charge high prices, slack off on customer service, cut R&amp;D investment, and lots of other bad things.  But Thiel argues this is only true in a world where nothing changes.  In dynamic, technology-driven markets, creative monopolies can actually be good for society because they have both the incentive and the available cash flow to invest enormous resources into inventing new technologies.  Rather than rent-seeking, this class of monopoly creates new categories of abundance.</p>
<p>Thiel&apos;s perspective on competition and monopoly is a core component of his advice for building a company.  If competition is problematic then the best thing to do is start by owning a small market as a monopoly and expand to adjacent markets from there.  This sounds easy but is very hard in practice, which is why few companies achieve it.  Companies that do gain a monopoly are able to build some durable, lasting advantage in their market.  This usually comes in the form of proprietary technology, network effects, economies of scale, and branding.  Some combination of these advantages can generate significant long-term value.</p>
<p>My own perspective is that having a monopoly isn&apos;t binary.  Nor are markets entirely static or dynamic.  These things exist on a continuous spectrum.  All companies are probably somewhere between perfect monopoly and perfect competition in any given market, no matter how that market is defined.  And all monopolies probably cause some harm to consumers, even if they also lead to some new inventions.  It&apos;s useful to speak about these things with such rigidity in the abstract to make a point, but the real world is messy.  That doesn&apos;t mean he&apos;s wrong, just that there&apos;s probably more to the story in most cases.</p>
<h4 id="easyhardorimpossible">Easy, Hard, or Impossible</h4>
<p>Another theme from the book that I found really interesting was the trichotomy between easy, hard, and impossible.  Thiel argues that most things are either easy or impossible to accomplish.  Easy things have already been done, and impossible things can never be done no matter how hard we try.  However, there are some things that are hard but possible.  In some cases these hard things were previously impossible but have become possible thanks to new technology.  Hard things that are possible to achieve are &quot;secrets&quot;.  They&apos;re truths about the world that most people do not know.  This is because common knowledge about what is possible changes much slower than what is possible in reality.  Secrets can come in many forms.  They can be about the natural world or they can be about people.  These hard things, these &quot;secrets&quot;, are what great companies are built on.  Companies doing hard things  are a shared conspiracy to change the world, founded on a secret known only to those on the inside.</p>
<p>This is a pretty radical view of what it means to be part of a company, however I think it makes sense.  Most of us aren&apos;t part of a conspiracy to change the world because we&apos;re not working for companies doing hard things.  We&apos;re all probably aware of companies that seem to fit this description though.  Tesla would be a good example.  Tesla is trying to bring about the end of the fossil fuels era by making electric cars mainstream - hard, but possible.  Tesla&apos;s employees generally appear to be fanatical about the company&apos;s mission.  Many on the outside probably don&apos;t think Tesla will succeed in its goal, but I bet most of its employees do.  Their &quot;secret&quot;, though widely known, still may not be widely believed.</p>
<p>Most companies that try to do something hard end up failing.  They can fail for a variety of reasons, many of which Peter talks about in the book (culture, team, incentives alignment, distribution, and so on).  The distribution of companies that try to do hard things and succeed vs. those that fail probably looks like a power law.  This explains why most secrets never see the light of day (and why the secret about hard things remains relatively unknown).</p>
<h4 id="salespersuasion">Sales &amp; Persuasion</h4>
<p>The last theme in the book that I found interesting has to do with salesmanship and persuasion.  As an engineer I&apos;m naturally distrustful of anything that involves &quot;selling&quot;.  It just feels messy, ambiguous, and somehow dishonest.  I think Peter actually changed my mind though.  He addressed this very point in the book about engineers underrating sales.  It makes sense to talk about sales from the standpoint of building a company.  Every successful product or service needs a distribution channel.  What intrigued me was thinking about sales in a much more general capacity.  Sales is just the art of persuasion.  In some sense, we&apos;re all selling all the time because persuading people is a normal part of human life.  When I really think about it, almost every conversation I have at work or at home involves some amount of persuasion, no matter how mundane.  In hindsight, it seems obvious that this is a skill that one can improve on just like anything else.</p>
<p>One other point that Thiel makes is that great salesmen are hidden from sight because it&apos;s not obvious that they&apos;re selling something.  He points out that Elon Musk, widely thought of as the consummate engineer, is also a grandmaster salesman.  If you think about it, this actually makes a lot of sense.  Elon is able to get some of the smartest, most driven people in the world to work at his companies.  He got a massive loan from the government to build electric cars at a time when the world economy was collapsing.  Most of the developed world thinks he&apos;s the real-life version of Iron Man.  These are the marks of someone who knows how to persuade.</p>
<p>This was one of the most information-dense books I&apos;ve ever read.  Most books use a lot of words to say very little.  &quot;Zero to One&quot; completely inverts this norm.  Nassim Taleb recommended that everyone read this book three times.  I&apos;ve gone through it twice (it&apos;s a quick read) and I&apos;m still picking up new things from it.  I think he may be on to something.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Thinking, Fast And Slow]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This post is about Daniel Kahneman&apos;s book <a href="https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555/?ref=johnwittenauer.net">Thinking, Fast and Slow</a>.  This is a book that everyone should probably read at some point in their lives, or at least read some cliff notes on to get an understanding of the basic ideas.  The reason is because it gets</p>]]></description><link>https://www.johnwittenauer.net/thinking-fast-and-slow/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad6747</guid><category><![CDATA[Book Review]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Sat, 06 May 2017 19:09:27 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This post is about Daniel Kahneman&apos;s book <a href="https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555/?ref=johnwittenauer.net">Thinking, Fast and Slow</a>.  This is a book that everyone should probably read at some point in their lives, or at least read some cliff notes on to get an understanding of the basic ideas.  The reason is because it gets at some fundamental truths about how humans function that are both non-obvious and extremely hard to recognize, even after learning about them.  These truths apply to virtually any context one can imagine and are not limited in their relevancy to any particular discipline or field of study.</p>
<p>The central thesis is a dichotomy between two modes of thought - &quot;System 1&quot; and &quot;System 2&quot;.  &quot;System 1&quot; is fast, instinctive, and effortless.  It happens automatically without us even realizing it.  &quot;System 2&quot; is slow, deliberate, and logical.  It involves effort and focus.  In reality there is no physical distinction between the two modes of thought - it&apos;s all happening inside our brains at the same time - but it&apos;s a useful abstraction because it captures all the ways that &quot;System 1&quot; thinking can mislead us.  The way I think about it, &quot;System 1&quot; is a filter on the raw sensory input we&apos;re taking in every second.  If we think about it in stream processing terms, &quot;System 1&quot; is reading from the raw input stream and applying a succession of higher-order functions to the stream so we can quickly make sense of the input.  It&apos;s what allows us to recognize faces, to read a book, to drive a car, or to catch a ball.  These things don&apos;t require pupil-dilating mental effort on our part - they just happen.  We might be aware that they&apos;re happening, but it&apos;s a passive awareness.  We don&apos;t have to expend mental energy to make it so.</p>
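<p>To make that stream processing analogy a little more concrete, here&apos;s a toy sketch in Python (my own illustration, not anything from the book) of raw input passing through a couple of cheap, automatic higher-order functions before the deliberate &quot;System 2&quot; part of the program ever sees it:</p>
<pre><code class="language-python"># A toy illustration of "System 1" as a pipeline of filters over raw input.
# The data and labels are invented purely for illustration.

raw_input_stream = [
    {"shape": "face", "familiar": True},
    {"shape": "blob", "familiar": False},
    {"shape": "face", "familiar": False},
]

# Each stage is a cheap, automatic transformation applied to everything we take in.
detected = filter(lambda x: x["shape"] == "face", raw_input_stream)
labeled = map(lambda x: "friend" if x["familiar"] else "stranger", detected)

# Only the heavily pre-processed result reaches deliberate "System 2" attention.
for judgment in labeled:
    print(judgment)  # friend, stranger
</code></pre>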
<p>&quot;System 1&quot; thinking is absolutely essential for us to be able to function on a basic level.  But it can also lead us astray.  The filters being applied automatically to our input stream lead to thoughts and actions that frequently deviate from the <a href="https://en.wikipedia.org/wiki/Rational_agent?ref=johnwittenauer.net">rational agent</a> model of human behavior.  They lead to all sorts of cognitive biases such as <a href="https://en.wikipedia.org/wiki/Anchoring?ref=johnwittenauer.net">anchoring</a>, <a href="https://en.wikipedia.org/wiki/Availability_heuristic?ref=johnwittenauer.net">availability</a>, <a href="https://en.wikipedia.org/wiki/Attribute_substitution?ref=johnwittenauer.net">substitution</a>, <a href="https://en.wikipedia.org/wiki/Framing_effect_(psychology)?ref=johnwittenauer.net">framing</a>, and <a href="https://en.wikipedia.org/wiki/Overconfidence_effect?ref=johnwittenauer.net">overconfidence</a>.  These effects can be overcome with deliberate thought and effort - in other words, engaging &quot;System 2&quot; thinking to come up with an objective, logical conclusion.  The problem is, these cognitive biases kick in so often and with such ease that it&apos;s hard to even recognize when such an error in judgement has occurred.</p>
<p>It&apos;s jarring to see just how often we&apos;re unconsciously influenced by cognitive biases, and how dramatic an effect they can have.  In one example, Kahneman was illustrating our tendency to accept a default option if one is available.  He cited organ donor statistics in two different countries.  In one country, the rate of people who opted to be organ donors was something like 90%.  In another, similar country the rate was 4%.  So what accounted for the difference?  In the first country you had to opt OUT of being a donor, and in the second country you had to opt IN.  That was it.  This is a startling conclusion.  It&apos;s possible that there were confounding factors that could account for some of this variation, but it&apos;s just one example of an effect that&apos;s been observed in many different contexts with a high degree of statistical significance.</p>
<p>The reason I think it would be valuable for everyone to read this book is because understanding these biases likely leads to clearer thinking overall.  It may be impossible, or even undesirable, to recognize every time our &quot;System 1&quot; thinking is making judgments that objectively do not make logical sense.  But with some effort it&apos;s certainly possible to weed out the worst offenses and recognize when we&apos;re making egregious errors.  In my own mental models, I&apos;ve incorporated what I learned from Kahneman into my generally <a href="https://en.wikipedia.org/wiki/Bayesian_inference?ref=johnwittenauer.net">Bayesian</a> approach to rational thought.  I think this is a fairly sensible way to approach solving problems and making decisions.</p>
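<p>For anyone who hasn&apos;t seen what a Bayesian update actually looks like, here&apos;s a toy example in Python with made-up numbers.  The point is simply that new evidence shifts a prior belief by a quantifiable amount rather than flipping it straight to certainty:</p>
<pre><code class="language-python"># Toy Bayes update - all of the probabilities below are invented for illustration.
prior = 0.01              # initial belief that some hypothesis is true
p_evidence_if_true = 0.9  # how likely the evidence is if the hypothesis is true
p_evidence_if_false = 0.05

posterior = (p_evidence_if_true * prior) / (
    p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
)
print(round(posterior, 3))  # ~0.154 - much stronger than 1%, but far from certain
</code></pre>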
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[How To Learn Hadoop For Free]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The &quot;big data&quot; technology landscape is changing really, really fast.  One consequence of this is that it&apos;s hard to find good training resources since they become outdated so quickly.  I wanted to get some baseline comfort with a variety of technologies in the Hadoop ecosystem but</p>]]></description><link>https://www.johnwittenauer.net/how-to-learn-hadoop-for-free/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad6746</guid><category><![CDATA[Big Data]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[Software Development]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Sun, 02 Apr 2017 17:31:55 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The &quot;big data&quot; technology landscape is changing really, really fast.  One consequence of this is that it&apos;s hard to find good training resources since they become outdated so quickly.  I wanted to get some baseline comfort with a variety of technologies in the Hadoop ecosystem but found my options for thorough, guided education somewhat lacking.  I eventually settled on MapR&apos;s <a href="http://learn.mapr.com/?ref=johnwittenauer.net">free training courses</a>.  Each one is like a miniature version of an online course (most require only a few hours of time).  They include interactive video content, quizzes, and various labs to complete using the MapR sandbox.  There&apos;s a fairly wide range of courses and the content is very professional.</p>
<p>Below is a brief synopsis of the courses they offer.  They are completely free to try out - just follow the link above, create an account, and register for the course you&apos;re interested in.  In addition, I put all of the content for the courses I worked through (including labs with example code) in <a href="https://github.com/jdwittenauer/hadoop-training?ref=johnwittenauer.net">a github repo</a>.</p>
<p>Note that due to the fast pace of change that I alluded to earlier (and MapR&apos;s vested interest in staying current) the course catalog will likely evolve over time.  It&apos;s possible that this post will become outdated fairly quickly, although I&apos;ll try to revisit it periodically to make sure the guidance is still relevant.  I should also note that there are snippets of content throughout the training that are specific to the MapR platform; however, more than 90% of it is platform-agnostic.</p>
<p>This list is not exhaustive.  It only includes the courses that I spent time working on.  Feel free to visit the <a href="http://learn.mapr.com/?ref=johnwittenauer.net">landing page</a> for a complete list of courses.</p>
<h3 id="hadoopessentials">Hadoop Essentials</h3>
<p>These are short, introductory courses that present a very high-level overview of the Hadoop ecosystem.</p>
<p><a href="http://learn.mapr.com/ess-100-introduction-to-big-data?ref=johnwittenauer.net">ESS 100 - Introduction to Big Data</a><br><br>
<a href="http://learn.mapr.com/ess-105-apache-hadoop-essentials?ref=johnwittenauer.net">ESS 101 - Apache Hadoop Essentials</a><br><br>
<a href="http://learn.mapr.com/ess-110-mapr-converged-data-platform-essentials?ref=johnwittenauer.net">ESS 102 - MapR Converged Data Platform Essentials</a><br></p>
<h3 id="mapreduce">MapReduce</h3>
<p><a href="https://en.wikipedia.org/wiki/MapReduce?ref=johnwittenauer.net">MapReduce</a> is how it all got started, and is still used quite a bit.  MapReduce is a programming model for distributing work over very large data sets across a cluster of machines.  The name comes from the two principal steps involved in the process - map (filtering, sorting etc.) and reduce (summary operations).</p>
<p><a href="http://learn.mapr.com/dev-301-developing-hadoop-applications?ref=johnwittenauer.net">DEV301 - Developing Hadoop Applications</a><br></p>
<h3 id="hbase">HBase</h3>
<p><a href="https://en.wikipedia.org/wiki/Apache_HBase?ref=johnwittenauer.net">HBase</a> is an open-source, non-relational, distributed column-store database written in Java.  HBase is very widely used as an alternative to relational databases for certain types of applications where scale is an issue.</p>
<p><a href="http://learn.mapr.com/dev-320-hbase-data-model-and-architecture?ref=johnwittenauer.net">DEV320 - HBase Data Model and Architecture</a><br><br>
<a href="http://learn.mapr.com/dev-325-hbase-schema-design?ref=johnwittenauer.net">DEV325 - HBase Schema Design</a><br><br>
<a href="http://learn.mapr.com/dev-330-developing-hbase-applications-basics?ref=johnwittenauer.net">DEV330 - Developing HBase Applications: Basics</a><br><br>
<a href="http://learn.mapr.com/dev-335-developing-hbase-applications-advanced?ref=johnwittenauer.net">DEV335 - Developing HBase Applications: Advanced</a><br><br>
<a href="http://learn.mapr.com/dev-340-apache-hbase-bulk-loading-performance-and-security?ref=johnwittenauer.net">DEV340 - HBase Bulk Loading, Performance, and Security</a><br></p>
<h3 id="spark">Spark</h3>
<p><a href="https://en.wikipedia.org/wiki/Apache_Spark?ref=johnwittenauer.net">Spark</a> is an open-source cluster-computing framework that runs on Hadoop.  I&apos;ve <a href="https://www.johnwittenauer.net/why-spark-may-be-even-bigger-than-the-hype/">written about Spark</a> in the past.  Suffice to say that it is a very exciting (and very popular) framework.</p>
<p><a href="http://learn.mapr.com/dev-360-apache-spark-essentials?ref=johnwittenauer.net">DEV360 - Spark Essentials</a><br><br>
<a href="http://learn.mapr.com/dev-361-build-and-monitor-apache-spark-applications?ref=johnwittenauer.net">DEV361 - Build and Monitor Spark Applications</a><br><br>
<a href="http://learn.mapr.com/dev-362-create-data-pipelines-using-apache-spark?ref=johnwittenauer.net">DEV362 - Create Data Pipeline Using Spark</a><br></p>
<h3 id="drill">Drill</h3>
<p><a href="https://en.wikipedia.org/wiki/Apache_Drill?ref=johnwittenauer.net">Drill</a> is an open-source framework for querying semi-structured and unstructured data at scale using SQL-like syntax.  I haven&apos;t seen a lot of interest in this outside of the MapR distribution but it&apos;s a mature technology that has a lot of potential.</p>
<p><a href="http://learn.mapr.com/da-410-apache-drill-essentials?ref=johnwittenauer.net">DA410 - Drill Essentials</a><br><br>
<a href="http://learn.mapr.com/da-415-apache-drill-architecture?ref=johnwittenauer.net">DA415 - Drill Architecture</a><br></p>
<h3 id="hive">Hive</h3>
<p><a href="https://en.wikipedia.org/wiki/Apache_Hive?ref=johnwittenauer.net">Hive</a> is a data warehousing infrastructure built on top of Hadoop that provides the capability to query data in the Hadoop file system using SQL-like syntax.  There&apos;s some conceptual overlap between Hive, HBase and Drill that requires some background and context to understand.  The relevant courses do a good job of clarifying these relationships.</p>
<p><a href="http://learn.mapr.com/da-440-apache-hive-essentials?ref=johnwittenauer.net">DA440 - Hive Essentials</a><br></p>
<h3 id="pig">Pig</h3>
<p><a href="https://en.wikipedia.org/wiki/Pig_(programming_tool)?ref=johnwittenauer.net">Pig</a> is a high-level programming language and framework for doing ETL (extract, transform, and load) tasks with data.  I&apos;m not sure how much Pig is used anymore with newer technologies like Spark offering similar capabilities, but I think there is still a use case for it.</p>
<p><a href="http://learn.mapr.com/da-450-apache-pig-essentials?ref=johnwittenauer.net">DA450 - Pig Essentials</a><br></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[20 Podcasts Worth Listening To]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I have a confession to make.  I love podcasts.  I have a semi-unhealthy addiction to podcasts.  I get most of my twitter follows and book recommendations from podcasts.  They&apos;ve become an essential part of my daily information diet.  The medium is just so good.  There&apos;s no</p>]]></description><link>https://www.johnwittenauer.net/20-podcasts-worth-listening-to/</link><guid isPermaLink="false">5bc3d7065abd4d0017ad6745</guid><category><![CDATA[Random Thoughts]]></category><dc:creator><![CDATA[John Wittenauer]]></dc:creator><pubDate>Sat, 04 Mar 2017 18:34:16 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I have a confession to make.  I love podcasts.  I have a semi-unhealthy addiction to podcasts.  I get most of my twitter follows and book recommendations from podcasts.  They&apos;ve become an essential part of my daily information diet.  The medium is just so good.  There&apos;s no better way to absorb raw, unfiltered information from interesting people with unique perspectives.  And despite its nascent beginnings, the selection of great podcasts to listen to has just exploded.  There&apos;s way too much good content out there to keep up with!</p>
<p>Below is my curated list of podcasts to try out.  This list is obviously biased toward my own fields of interest.  If you happen to be an art enthusiast or love pop culture, this list will be next to useless for you.  But I&apos;m guessing if you&apos;re reading this blog then we have at least SOME shared interests.  Even if it&apos;s not your thing, give one or two a try - you might be surprised.</p>
<p>Without further ado, here are 20 awesome podcasts that you should definitely check out.</p>
<h4 id="andreessenhorowitz">Andreessen Horowitz</h4>
<p><a href="http://a16z.com/podcasts/?ref=johnwittenauer.net">http://a16z.com/podcasts/</a></p>
<p>Perfect for technology nuts and wannabe-entrepreneurs.  The a16z staff hosts prominent authors, investors, CEOs etc. to talk emerging technology and how the business of technology is being disrupted.  For instance, a few recent episodes covered VR for storytelling, genomics, and the next evolution of cities.</p>
<h4 id="conversationswithtyler">Conversations With Tyler</h4>
<p><a href="https://medium.com/conversations-with-tyler/all?ref=johnwittenauer.net">https://medium.com/conversations-with-tyler/all</a></p>
<p>Tyler Cowen hosts various guests to discuss a wide range of topics, usually with some angle relating to macro-economics.  It&apos;s actually pretty hard to be more specific than that, they&apos;re kind of all over the place.</p>
<h4 id="exponent">Exponent</h4>
<p><a href="http://exponent.fm/?ref=johnwittenauer.net">http://exponent.fm/</a></p>
<p>Ben Thompson (the &quot;Stratechery&quot; guy) and his co-host James Allworth discuss the business and strategy of technology, with the occasional diversion into bigger-picture topics in society and politics.  Ben is really, really good at understanding how tech companies operate and where they&apos;re headed.  It&apos;s kind of like getting a business school degree, but better (and for free).</p>
<h4 id="hiddenbrain">Hidden Brain</h4>
<p><a href="http://www.npr.org/podcasts/510308/hidden-brain?ref=johnwittenauer.net">http://www.npr.org/podcasts/510308/hidden-brain</a></p>
<p>Consider it a weekly lesson in psychology, sociology and human behavior, using real-world stories.  I mean, everyone likes stories right?</p>
<h4 id="investlikethebest">Invest Like The Best</h4>
<p><a href="http://investorfieldguide.com/?ref=johnwittenauer.net">http://investorfieldguide.com/</a></p>
<p>Ostensibly it&apos;s about investing, but a lot of the guests aren&apos;t investors at all so it&apos;s hard to pigeonhole.  Consider it more an exercise in learning how to think and cultivate curiosity, while also learning about things like hedge funds, venture capital, value investing etc.</p>
<h4 id="oreillydatashow">O&apos;Reilly Data Show</h4>
<p><a href="https://www.oreilly.com/topics/oreilly-data-show-podcast?ref=johnwittenauer.net">https://www.oreilly.com/topics/oreilly-data-show-podcast</a></p>
<p>For big data nerds.  Actually most episodes lately are about AI, because fucking everyone in tech now has to talk about AI at every opportunity.  But it&apos;s also about big data.  Fairly in-the-weeds discussion, and a lot of emphasis on open-source projects.  It&apos;s a great way to stay up-to-date on the big data landscape.</p>
<h4 id="partiallyderivative">Partially Derivative</h4>
<p><a href="http://partiallyderivative.com/?ref=johnwittenauer.net">http://partiallyderivative.com/</a></p>
<p>For data science nerds.  The crew talks data and machine learning while drinking obscure artisanal beer.  Hilarity (and learning) ensues.  They&apos;ve also had a lot of good interviews with various data scientists lately (although they&apos;re slacking a bit on the beer).</p>
<h4 id="radiolab">Radiolab</h4>
<p><a href="http://www.radiolab.org/series/podcasts/?ref=johnwittenauer.net">http://www.radiolab.org/series/podcasts/</a></p>
<p>Radiolab is just awesome.  Not sure how else to put it.  They do episodes on all sorts of topics (the latest one on CRISPR was really good).  Production quality is super high.  The topics are also really accessible.  If you&apos;re new to podcasting this is a great place to start.</p>
<h4 id="rationallyspeaking">Rationally Speaking</h4>
<p><a href="http://rationallyspeakingpodcast.org/archive/?ref=johnwittenauer.net">http://rationallyspeakingpodcast.org/archive/</a></p>
<p>Their tagline is &quot;exploring the borderlands between reason and nonsense&quot;.  If you&apos;re skeptical of that claim, then you should probably be listening to this podcast!</p>
<h4 id="recodedecode">Recode Decode</h4>
<p><a href="http://www.recode.net/recode-decode-podcast-kara-swisher?ref=johnwittenauer.net">http://www.recode.net/recode-decode-podcast-kara-swisher</a></p>
<p>Kara Swisher grills various silicon valley elites about various tech topics.  Okay, it&apos;s not only that.  But it&apos;s MOSTLY that.</p>
<h4 id="revisionisthistory">Revisionist History</h4>
<p><a href="http://revisionisthistory.com/?ref=johnwittenauer.net">http://revisionisthistory.com/</a></p>
<p>Malcolm Gladwell did a ten-part series where he goes in-depth on random stories from the past and shows how the popular narrative around those events was either wrong or at least incomplete.  The show has been on hiatus for a while but worth spinning through the archive.</p>
<h4 id="wakingupwithsamharris">Waking Up With Sam Harris</h4>
<p><a href="https://www.samharris.org/podcast/full_archive?ref=johnwittenauer.net">https://www.samharris.org/podcast/full_archive</a></p>
<p>Sam is a really smart dude.  He invites lots of other really smart dudes on his show, and they spend a few hours going ludicrously in-depth on various philosophical, scientific, and political subjects.  It takes a special type of person to be into this sort of thing, but if you&apos;re that type of person, you won&apos;t find anything like it anywhere else.</p>
<h4 id="startalkradio">StarTalk Radio</h4>
<p><a href="https://www.startalkradio.net/show/?ref=johnwittenauer.net">https://www.startalkradio.net/show/</a></p>
<p>Neil deGrasse Tyson, also known as the millennials&apos; Carl Sagan, is at his best in these entertaining, free-flowing discussions about various science or science-adjacent topics.  All sorts of interesting guests and very wide-ranging interviews.  Also, one of the comedians that frequents the show kind of sounds like the guy from Archer, which is pretty cool.</p>
<h4 id="talkingmachines">Talking Machines</h4>
<p><a href="http://www.thetalkingmachines.com/blog/?ref=johnwittenauer.net">http://www.thetalkingmachines.com/blog/</a></p>
<p>For hardcore machine learning nerds.  They do lots of interviews with researchers in various specializations.  It&apos;s pretty technical but very useful if you&apos;re an ML practitioner.  No new episodes in the last few months but I&apos;m keeping an eye on it.</p>
<h4 id="tedradiohour">TED Radio Hour</h4>
<p><a href="http://www.npr.org/programs/ted-radio-hour/?ref=johnwittenauer.net">http://www.npr.org/programs/ted-radio-hour/</a></p>
<p>This is one of my favorites.  Very high production quality, super-wide range of interesting topics.  Each hour-long show stitches together excerpts from several TED talks that share some common theme.  They also add some narration and frequently interview the TED speakers to add some color to the original talks.  Awesome series.</p>
<h4 id="theezrakleinshow">The Ezra Klein Show</h4>
<p><a href="http://www.vox.com/ezra-klein-show-podcast?ref=johnwittenauer.net">http://www.vox.com/ezra-klein-show-podcast</a></p>
<p>Ezra is a political journalist but the podcast isn&apos;t focused on politics (although some of his guests are politicians).  Just a lot of really good interviews with really smart people.  Good podcast hosts are able to frame questions in ways that guide the discussion in interesting directions, and Ezra is especially good at this.</p>
<h4 id="theknowledgeproject">The Knowledge Project</h4>
<p><a href="https://www.farnamstreetblog.com/the-knowledge-project/?ref=johnwittenauer.net">https://www.farnamstreetblog.com/the-knowledge-project/</a></p>
<p>Just recently discovered this one.  The stated goal is to focus on &quot;actionable strategies that help you make better decisions, avoid stupidity, and live a better life&quot;.  A lot of the interviews are geared towards reading and knowledge acquisition.</p>
<h4 id="thetimferrissshow">The Tim Ferriss Show</h4>
<p><a href="http://tim.blog/podcast/?ref=johnwittenauer.net">http://tim.blog/podcast/</a></p>
<p>Pretty much everyone knows who Tim is.  In addition to his famous &quot;4-hour&quot; books, he&apos;s one of the guys that really launched podcasting as a medium into the mainstream.  His tagline is &quot;deconstructing world-class performers&quot;.  A lot of the interviews are really good, although some could be edited down a bit.</p>
<h4 id="valueinvestingpodcast">Value Investing Podcast</h4>
<p><a href="http://valuepodcast.com/?ref=johnwittenauer.net">http://valuepodcast.com/</a></p>
<p>I&apos;m not gonna lie, these conversations are dry as hell.  But if you&apos;re serious about investing then there&apos;s a LOT of good information to learn here.</p>
<h4 id="voxstheweeds">Vox&apos;s The Weeds</h4>
<p><a href="http://www.vox.com/the-weeds?ref=johnwittenauer.net">http://www.vox.com/the-weeds</a></p>
<p>For policy nerds (yes, that&apos;s a thing).  Actually politics more broadly, but they spend a lot of time focused specifically on policy details (health care, taxes, education, and so on).  It isn&apos;t called &quot;The Weeds&quot; for nothing.  For some reason I find it surprisingly fascinating.  Useful if you want to talk circles around baffled family members at the next politically-charged Thanksgiving dinner.</p>
<p>Happy podcasting!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>