“Artificial intelligence technology” is too often just a hot buzzword for marketing, a thing one sticks on product labels, like “new and improved.”
The hype obscures what is in fact a revolution in computing. AI is not one thing but a movement to use vast computing power to find patterns, and it has many years of vibrant development ahead.
That should be very good for the companies that are the arms merchants in AI technology, particularly chip companies like Micron Technology (MU) and Xilinx (XLNX). A new form of computing is emerging, and it demands new chips.
The change is every bit as profound as the rise of microcomputing in the 1970s that made Intel a king of microprocessors. It makes Micron and Xilinx more important, but it will probably also lead to future chip stars that aren’t public now or may not even have been founded yet.
Some predictions made in that story have come to pass, such as Nvidia (NVDA) being a beneficiary. Its stock is up 770% since then. We also said Micron might be among the most valuable companies in the chip world, because AI can potentially use a lot more memory chips. The stock is up 270% since then.
It’s early in AI’s development; there may be more robust returns to come.
AI computing has been around for decades. What’s known as machine learning goes back to at least the 1970s. The problem with that kind of AI was that it involved human programmers laboriously coding explicit instructions for machines. They had to formulate rules for which aspects of a photograph, say, were essential to recognize an object, and then code all those rules into the computer. Machine learning moved at the speed that humans could formulate and code rules.
Then came the internet and cloud computing. Suddenly, internet giants had access to huge stores of data and computing power in their data centers. Computer scientists were able to create a new mode of machine learning, called deep learning, that didn’t require formulating explicit rules.
An example of deep learning is shown in the graphic below, in which a computer is taught to recognize pictures of cats. Millions of cat photos are fed into the computer, along with millions of examples of what’s not a cat, such as dogs. The computer detects patterns of very basic shapes in the pixels of each image. It then discovers how those basic shapes regularly assemble into discernible features that are relevant (pointy ears) and not relevant (floppy ears). The result becomes a model the computer can use to detect a cat in any new picture presented to it.
All the computing power Alphabet’s (GOOGL) Google and Facebook (FB) have at their disposal plays an important role. It allows the computer to run through the pattern detection over and over again, comparing its output with the right answer and feeding the error back through the network to adjust it, thus refining the model. That error-correcting form of learning is known as backpropagation, and it was never practical at this scale before.
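The training loop described above can be sketched in miniature. This is a hedged illustration, not Google’s or Facebook’s actual system: the “features” (pointy ears, whiskers, barking) and the data are invented, and a real image network would learn from raw pixels at vastly larger scale. It shows a tiny two-layer network refining its model by repeatedly pushing its output error backward through the layers — backpropagation.

```python
# Toy backpropagation sketch (illustrative only, with invented features).
import math, random

random.seed(0)

# Hypothetical feature vectors: [pointy ears, whiskers, barks]
DATA = [
    ([1.0, 1.0, 0.0], 1.0),  # cat
    ([1.0, 0.9, 0.0], 1.0),  # cat
    ([0.0, 0.2, 1.0], 0.0),  # dog
    ([0.1, 0.0, 1.0], 0.0),  # dog
]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One hidden layer of 3 units feeding one output unit.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
W2 = [random.uniform(-1, 1) for _ in range(3)]
LR = 1.0  # learning rate

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    out = sigmoid(sum(w * h for w, h in zip(W2, hidden)))
    return hidden, out

for _ in range(3000):  # run the pattern detection over and over
    for x, target in DATA:
        hidden, out = forward(x)
        # Backpropagation: compute the output error, then push it
        # backward to nudge every weight toward a better model.
        d_out = (out - target) * out * (1 - out)
        d_hidden = [d_out * W2[j] * hidden[j] * (1 - hidden[j]) for j in range(3)]
        for j in range(3):
            W2[j] -= LR * d_out * hidden[j]
            for i in range(3):
                W1[j][i] -= LR * d_hidden[j] * x[i]

# The refined model classifies an unseen example.
_, score = forward([0.9, 1.0, 0.0])  # pointy ears, whiskers, no barking
print("cat probability:", round(score, 2))
```

The key design point is the backward pass: the error at the output is distributed to every weight in proportion to how much that weight contributed to it, which is what makes the repeated refinement converge.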
Such deep learning is a powerful new paradigm: Set a basic objective, like classifying pictures, and let the computer discover the patterns that lead to a solution. The sequence of steps the computer goes through is called a network, and there are all kinds of different networks that are good at different things. For example, something called reinforcement learning uses very simple information, just a scenario and a set of rules, and figures out an optimal course of action.
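Reinforcement learning can be sketched in a toy form as well. This is an assumed minimal example — tabular Q-learning on an invented five-square corridor, nothing like AlphaGo Zero’s scale — but it shows the same idea: the computer is given only a scenario and a set of rules, and discovers an optimal course of action by trial and error.

```python
# Minimal tabular Q-learning sketch (illustrative scenario, not AlphaGo Zero).
import random

random.seed(1)

N_STATES = 5        # squares 0..4; a reward waits at square 4
ACTIONS = [-1, +1]  # the rules: move left or move right
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 500

# Q[state][action] = learned estimate of how good that action is there.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Explore randomly sometimes; otherwise exploit what's been learned.
        if random.random() < 0.2:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best next move.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)  # the agent should have discovered: always move right
```

No examples of correct behavior are ever supplied — only the rules and the reward — yet the optimal policy emerges from repeated self-directed trials, which is the property that made AlphaGo Zero’s self-play training possible.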
That approach was used by Google in late 2017 to make a computer system called AlphaGo Zero. It learned to play the ancient Chinese game Go with nothing more than an initial arrangement of pieces on the game board and a knowledge of the rules. No examples of human play were needed. After 40 days of trying moves in millions of games against itself, AlphaGo Zero was able to defeat its predecessor, which itself had defeated the top human players.
These networks are in some cases not just learning what they’ve been assigned to learn, they are making new discoveries. As Google researchers observed of AlphaGo Zero, it found new approaches to the game.
“AlphaGo Zero may be learning a strategy that is qualitatively different to human play,” they wrote in the journal Nature.
The important part for tech companies is that these networks can be helped by new forms of computing devices. The math they perform, and the stages they go through, like backpropagation, require new machines. That has already bolstered the fortunes of Nvidia’s GPU chips, which are more efficient than Intel’s microprocessors at some kinds of math.
Demand for Micron memory chips has surged. Some kinds of networks function by storing the results of each operation in memory as they proceed through the many steps of pattern discovery. Micron executives last week identified a market for “data center” memory chips, the kind that could be used in deep learning, of $62 billion by 2021, more than double what it is today.
Deep learning is going to become much more widespread in computing. Over time, deep learning will take over other programming tasks from humans. Computer scientists have hardly begun that journey, which will require even more exotic chips and software. Investors should be looking far down the road.