Curious Insight

Technology, software, data science, machine learning, entrepreneurship, investing, and various other topics


TensorFlow, OpenAI, And The State Of Open Source ML

2nd February 2016

We've seen a number of really interesting developments occur in the machine learning world over the last few months and I wanted to spend some time teasing out a few ideas I've been thinking about in relation to these announcements. To recap, the first major development came in November when Google announced that they were open-sourcing their second-generation machine learning system, TensorFlow. About a month later, Facebook responded by open-sourcing their deep learning hardware design. And around the same time we saw Elon Musk, Sam Altman and other Silicon Valley elites announce the formation of OpenAI, a non-profit research organization with $1 billion in capital to advance the state of artificial intelligence.

There are a number of different ways to look at these seemingly unrelated events. I think it would be interesting to try to parse out 1) why each move was made, 2) whether or not the move makes sense strategically for those involved, and 3) what the long-term impact will be on open-source machine learning, and perhaps on the overall field of artificial intelligence.

First let's look at TensorFlow. Machine learning is really important to Google and they probably have the best team on the planet dedicated to advancing the state of the art in this area. So on the surface, it appears that Google is giving away something really valuable that lessens their competitive advantage. After all, couldn't Facebook or Microsoft or some random startup use this software to build some awesome machine learning apps that compete with Google? Well...not exactly. It's important to remember that there are three critical components to a large-scale machine learning project. Algorithms/software are one of those pieces, but you also need lots of data and lots of infrastructure to run it on, two areas where Google also has a huge competitive advantage over most firms. Data and infrastructure are also MUCH harder and more expensive to acquire at massive scale.

But still, why give away ANY of their competitive advantage, even if it's the easiest part to replicate? I think there are a number of advantages Google gets out of this model that likely outweighed any downside. Open-sourcing their software allows contributors outside of Google to fix bugs, add new features and improve the software basically for free, which benefits Google. It also gives new up-and-coming talent a chance to gain experience using Google's toolset on their own time, effectively pre-training them for Google to then hire and drop into an engineering team ready to go (this is further supported by the fact that Google just released a class on Udacity for this very purpose). Finally, the open model is very appealing to top researchers and engineers so it's a powerful recruiting draw as well.

It's also important to remember that Google did NOT give away all of their algorithmic advantage. Notably absent from the release is anything pertaining to the work that the DeepMind team is doing on reinforcement learning. In fact, by most accounts they do not even use TensorFlow; they're still running an in-house version of Torch. TensorFlow, while still novel and a significant contribution, is closer to being a better Theano than it is to some revolutionary new technology unleashed on the world. The algorithms baked into the software are already widely known and understood. The real value is in the engineering that runs those algorithms cross-platform and at scale across a wide range of devices. That's not a trivial feat by any stretch, but it also isn't something that only Google could do. Given all of the benefits that Google gets from open-sourcing it relative to what they're giving up, it was likely a very easy decision.
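To make the "better Theano" comparison concrete: both libraries are built around the same core idea of recording tensor computations as a symbolic dataflow graph that a runtime can later optimize and execute on whatever hardware is available. The sketch below is NOT TensorFlow's actual API, just a toy illustration of that deferred-evaluation idea in plain Python:

```python
# Toy sketch of the dataflow-graph idea behind Theano and TensorFlow:
# operations are recorded symbolically, then evaluated later by a runtime
# that is free to decide where and how to execute them.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def __add__(self, other):
        return Node("add", (self, other))

    def __mul__(self, other):
        return Node("mul", (self, other))

def evaluate(node):
    """Walk the graph and compute a value. A real engine would optimize
    the graph first and dispatch subgraphs to CPUs, GPUs, or phones."""
    if node.op == "const":
        return node.value
    args = [evaluate(n) for n in node.inputs]
    if node.op == "add":
        return args[0] + args[1]
    if node.op == "mul":
        return args[0] * args[1]
    raise ValueError(f"unknown op: {node.op}")

x = Node("const", value=3.0)
y = Node("const", value=4.0)
z = x * y + x        # builds a graph; nothing is computed yet

print(evaluate(z))   # -> 15.0
```

The engineering value in TensorFlow lies almost entirely in what the `evaluate` step hides here: graph optimization, device placement, and distributed execution.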

A similar argument could be made with Facebook's release of their deep learning server design. Facebook's competitive advantage is their data, not their server designs. And it's debatable who would even benefit from specs like this. Is a typical ML startup going to invest tons of capital in servers with dozens of video cards to build deep learning models? Nope, they're going to fire up AWS instead. Is it possible that Google or Microsoft could look at these plans and go "hey, there are some good ideas here, let's copy what they're doing"? Maybe, but it's probably unlikely. The designs could make their way into some university labs, but that doesn't really pose a threat to Facebook. Again, there's very little competitive advantage in design specs for a commoditized resource.

The benefits for Facebook, however, are similar to those for Google, particularly around attracting talent. I don't think it's a coincidence that Google, Facebook, and OpenAI all had major announcements in the span of a few weeks. These guys are basically at war with each other to get the top researchers and engineers on their teams, and their strategies around "openness" are a key factor in luring those individuals. I don't think it would be a stretch to argue that the MOST important factor for success in large-scale, bleeding-edge machine learning is the people on the team. More important than software, more important than data, more important than anything. When viewed through this lens their actions make a LOT of sense, because giving away stuff that others might consider valuable is a distant second to getting the best people to come work for them.

Which brings us to OpenAI. Their announcement was interesting in that it sounded very grandiose and far-reaching but lacked many specifics other than defining the initial team. The basic idea is that a bunch of wealthy technologists and venture capitalists are getting together and promising $1 billion to build a team to work on various problems in AI and give away all of their output for free. We don't really know yet what this output will look like but it could be new research, software tools, data, etc. On the surface this seems like an incredibly altruistic thing to do. And if the group of founders really, truly believes that human-level AI is simultaneously an existential threat to humanity and a massive growth opportunity, then it's possible (though debatable) to reach the conclusion that developing this technology in the open is the logical next step.

But it's ALSO possible, maybe even likely, that this is a really smart strategic move from a group of individuals who have a vested interest in giving startups an opportunity to compete in this arena. Imagine that you're a venture capitalist funding AI startups. In today's world you have to compete with the likes of Google, Facebook etc. on talent, technology, data, resources, you name it. It's a pretty daunting proposition. But what if you could put together a group that could negate some of the incumbents' advantages by developing the same technology in the open, effectively commoditizing a portion of it? And in the process you even manage to hire away some of their key talent, thus further weakening the incumbents. The output from OpenAI could become a "building block" of sorts that allows startups to build competitive AI-based products without hiring all the researchers and engineers necessary to start from scratch. If this turned out to be the case, it's conceivable that the value unlocked by that output could greatly exceed the $1 billion investment, with at least some of the value going back to the original investors.

I'm not saying that this is the OpenAI founders' true motivation, because I have no idea. But it would be foolish to think they haven't considered it. There's no reason that OpenAI can't be both a genuine attempt to get a step ahead in the hypothetical AI arms race AND a pretty smart strategy move. I do not, however, buy the notion that OpenAI is purely meant to counter the so-called "killer robot" scenario that Elon Musk has talked about - the idea that a super-intelligent machine could escape the control of its creators and eventually wipe out humanity.

While it's possible that we will - at some point - create a sentient, artificial general intelligence that rivals or surpasses human capability, I do not think it's as imminent a threat as some seem to believe. The general consensus among those actually doing research on the front lines seems to be that these concerns are overblown. Andrew Ng probably stated it best when he said "I don't work on not turning AI evil today for the same reason I don't worry about the problem of overpopulation on the planet Mars". The implication is that it's possible that someday we will have to concern ourselves with this scenario, but we're so far away from building a truly sentient machine that this just isn't a problem that anyone can productively work on. There's a slightly better argument to be made for things like autonomous weapons, which are probably closer to becoming reality than sentient machines. But it's unclear how OpenAI would do anything to solve that problem.

At the end of the day, it doesn't really matter what the group's motivations are. Just as with the Google and Facebook announcements, the end result is that great technology is given away for free to anyone with the ability to use it. This is obviously a good outcome for open source advocates and individuals or startups interested in using this technology. More broadly, I think these announcements represent the continuation of a trend that has been underway for a while but continues to accelerate. Big tech firms are increasingly adopting the open source model for portions of their technology stack, and in doing so are becoming a driving force in the open source landscape. This effect is particularly noticeable in functional areas where expertise is highly sought-after, because significant open source contributions can boost a firm's reputation with the community.

This is really important because there's a different dynamic at work with company-driven open source projects than there is with community-driven projects. The latter are much more organic and free-flowing but also tend to lack the focus and cohesion of a major corporate effort (this is certainly not true in all cases but is probably true on average). The open-source community has produced amazing things and will continue to produce amazing things. But it's not clear to me that the open source community could have built TensorFlow.

What does all of this mean for the state of open source machine learning? I think the end result is that the quality and capability of open source machine learning tools will increase much faster than they would organically. This is probably already true, but it will become much more evident. We could get to a point where certain parts of the stack are effectively standardized/commoditized, much like AWS has commoditized the infrastructure needed to conduct experiments (TensorFlow may already be this for tensor math). Looking further down the road, I think the tools will become much more sophisticated as they build on top of each other, providing higher and higher levels of abstraction as companies master and then open-source new technologies.
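One way to picture this layering of abstractions: a higher-level building block like a neural-network layer is just a thin wrapper over the commoditized tensor math underneath it. The sketch below uses NumPy rather than TensorFlow purely for illustration; the function name `dense` and its shapes are my own invented example, not any library's API:

```python
import numpy as np

# Illustration of stacking abstractions: a fully-connected ("dense") layer
# is a thin wrapper over commoditized tensor primitives (matmul + add),
# the kind of low-level math TensorFlow standardizes.

def dense(x, weights, bias):
    """One dense layer: relu(x @ W + b). Higher-level tools expose this
    as a single call so users never touch the raw tensor ops."""
    return np.maximum(0.0, x @ weights + bias)

# Toy usage: a batch of 2 examples with 3 features, mapped to 4 outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3))
w = rng.normal(size=(3, 4))
b = np.zeros(4)

out = dense(x, w, b)
print(out.shape)  # (2, 4)
```

Each new open-sourced layer makes the one above it easier for everyone to build, which is exactly the compounding effect described above.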

It's really fascinating to take a step back and look at the current state of things. We're effectively seeing a market where the rational behaviour is to spend lots of time and effort building something incredibly useful, and then giving it away. This was probably unthinkable just a decade ago, but things have changed quickly. It would be fascinating to analyze how we arrived at this point, but ultimately it's not that important. What matters is that the value created by this trend is probably going to be huge. When competitive market forces drive big corporations toward a model of "openness", everyone wins.

Follow me on Twitter to get new post updates.

Strategy, Machine Learning, Curious Insights

Data scientist, engineer, author, investor, entrepreneur