Monday, December 18, 2017

What you gonna do when they come for you?

Artificial Intelligence (AI) is the focus of modern research in information technology. The successes of deep learning in narrow tasks like voice and image recognition have made the industry confident that in the near future we will see an AI breakthrough, and that every software system will integrate an AI subsystem, both for the human-computer interface and for the actual information processing. The future will be characterized by AI personal assistants, self-driving cars, computer tutors for university courses, robot surgeons, digital fashion designers and writers; the list goes on, since it seems to be limited only by our imagination and not by our technology.

To most people all of the above seems both futuristic and utopian. IT professionals who work in companies with dedicated teams for machine learning (ML) and data processing get a glimpse of what is coming, but mostly these are small applications in narrow sections of their business domain, like autocomplete, suggestions or data categorization. Even for them, large-scale AI is something that is done somewhere far far away.

And then, all of a sudden, AI knocks on your door. For me it happened last week, when a new chess engine and a new way to create indices for databases were announced. In these two domains, chess and computer science, I have devoted much time and work to gain some expertise, so the thought that ML had now conquered them was at first shocking. But when I relaxed and gave it a proper thought, I became excited about the upcoming future.

Chess

A new engine, AlphaZero, defeated the previous engine world champion Stockfish with 28 wins, 72 draws and no losses. The news received great publicity on chess sites like chessbase and chess.com, and top GMs like Vishy Anand, the 15th World Champion, reacted to it. But what was so special about the new engine?

AlphaZero is the successor of AlphaGo, a machine for the game of Go, which beat the human world champion last year and caused a sensation, as the game of Go was regarded as too complex for a computer. AlphaGo not only became capable of playing, but did it with reinforcement learning, a subdomain of machine learning. The machine was later tuned and trained for chess: it learned to play in 4 hours and reached world-class level in about a day, again using reinforcement learning. To oversimplify the process, the machine started with absolutely no knowledge of chess, played games against itself, learned from its mistakes and evolved. During the match with Stockfish it did not use any databases of chess games or rules. Traditional chess engines use brute-force algorithms to prune the tree of variations, and they also employ large databases of openings and endings for help. Until now, researchers believed that a machine could not learn chess with ML. AlphaZero proved that this was a prejudice.
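To get a feel for "played games against itself, learned from its mistakes and evolved", here is a toy sketch of self-play learning on tic-tac-toe rather than chess. It is purely illustrative and hypothetical: AlphaZero actually combines deep neural networks with Monte Carlo tree search, while this sketch uses a simple table of state values. The program starts with zero knowledge of strategy and improves only from the outcomes of its own games.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def after(board, move, player):
    """Board (as a string) resulting from player playing move."""
    b = board[:]
    b[move] = player
    return ''.join(b)

V = {}           # state -> estimated win chance for the side that just moved
ALPHA = 0.3      # learning rate
EPSILON = 0.1    # exploration rate

def value(state):
    return V.get(state, 0.5)  # unknown states start as a coin flip

def play_one_game():
    board = [' '] * 9
    player = 'X'
    history = []  # (player, state reached by that player's move)
    while True:
        moves = [i for i, s in enumerate(board) if s == ' ']
        if random.random() < EPSILON:
            move = random.choice(moves)   # explore a random move
        else:                             # exploit: best-looking resulting state
            move = max(moves, key=lambda m: value(after(board, m, player)))
        board[move] = player
        history.append((player, ''.join(board)))
        w = winner(board)
        if w or ' ' not in board:
            # game over: nudge every visited state toward the actual outcome
            for p, state in history:
                target = 1.0 if w == p else (0.0 if w else 0.5)
                V[state] = value(state) + ALPHA * (target - value(state))
            return w
        player = 'O' if player == 'X' else 'X'

random.seed(0)
for _ in range(5000):
    play_one_game()
```

After a few thousand self-play games the value table starts to prefer winning lines over losing ones, with no opening book and no hand-coded rules, which is the essence of the approach, stripped of all the scale.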

Does the existence of such a strong engine imply that the game is now dead? Absolutely not! More people play the game today than ever, despite the existence of strong chess engines that even the top grandmasters cannot beat. The new machine, with its novel approach to chess, opens new ways for a better understanding of the game.

The usual way players use a chess engine is as follows: they enter a game into a board and have the engine run in the background. When the engine finds an error, it pops up a notification window saying "You moved the bishop to c5, you should have moved the rook to c1", and that's all. Depending on his level, the player may or may not understand why the bishop move was wrong and why the rook move is better. People think in terms of plans, time, space, the center and other chess concepts. The chess engine cannot offer an explanation in words; it can only suggest good moves. Now let's have a look at a random chess analysis from the internet, like this. As you can see, there are phrases in English explaining the moves and the plans. Imagine if we could feed these analyses to AlphaZero. Using the chess engine and a system for natural language processing, it could combine words and moves and produce analyzed games for humans, with comments like "The rook move is wrong, it loses time. Better move the knight to organize an attack in the center". This is a good case of embracing the machine. Players should analyze their games, in other words create more data for the machine, submit them to the engine, reflect on the feedback and start over again. This would help us better understand both the machine and the game, and it certainly won't prohibit us from playing chess.

Computer Science

Last week I saw the slides of an interesting talk by Jeff Dean, the head of Google Brain, Google's AI division. In this excellent talk there was a section on "Learned Index Structures" with a link to a paper. There they present experimental results from replacing traditional search structures like B-trees, hash tables and Bloom filters with machine learning models, and the results are indeed astonishing, as they get the same correctness semantics with a performance speedup.

Index structures are used to speed up access to a large dataset. Suppose you have a large set of records of books. You can create an index structure for the authors in a preprocessing step and then be able to quickly retrieve all the books of an author without having to check every record. Traditional index structures like B-trees use heuristics and smart algorithms underneath and are traditionally implemented as hand-written code. Their characteristic is that they are context-unaware and not adaptable to the data: they try to handle even the worst cases, where the data distribution is not suitable for the algorithm.
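The books-and-authors example above can be sketched in a few lines. This is the classic hand-built kind of index: preprocess the records once, then answer author lookups without scanning every record (the book titles here are just sample data for illustration).

```python
from collections import defaultdict

# Sample dataset of book records.
books = [
    {"title": "The Trial", "author": "Kafka"},
    {"title": "The Castle", "author": "Kafka"},
    {"title": "Ulysses", "author": "Joyce"},
]

# Preprocessing step: build an index from author to their records.
author_index = defaultdict(list)
for record in books:
    author_index[record["author"]].append(record)

# O(1) average-case lookup instead of an O(n) scan of all records.
kafka_books = author_index["Kafka"]
```

Note that nothing here adapts to the data: whether the authors are two or two million, uniformly spread or heavily skewed, the structure and the code stay the same.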

The learned index takes a novel approach: it views the index as a model to be trained with the data. It uses a small portion of the data to generate and train the model, which is subsequently used as the index and facilitates queries and updates. Another view of the above procedure is that it computes the cumulative distribution function P of the data set; a query for record r then uses P(r) to find the record's position in the data set. To summarize: you don't invent algorithms or configure anything. You use your data to train a predefined model, which you then use for indexing. The paper describes all the details and has all the background on the research, but for now just note that a learned index is an ML model trained on your data, not a complex algorithm that tries hard to rebalance internal search trees to maintain its performance semantics.
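A minimal sketch may make the idea concrete. Assume sorted integer keys; we fit a straight line position ≈ a·key + b, which approximates the keys' cumulative distribution function, record the model's worst-case prediction error seen on the data, and at query time binary-search only the small window around the predicted position. This is my own toy stand-in, not the paper's method: the paper uses staged (recursive) models, typically neural networks, where this uses a single least-squares line.

```python
import bisect
import random

random.seed(42)
keys = sorted(random.sample(range(1_000_000), 10_000))
n = len(keys)

# "Training": least-squares fit of position i against key value k.
mean_k = sum(keys) / n
mean_i = (n - 1) / 2
cov = sum((k - mean_k) * (i - mean_i) for i, k in enumerate(keys))
var = sum((k - mean_k) ** 2 for k in keys)
a = cov / var
b = mean_i - a * mean_k

# Worst-case prediction error, measured once over the training data.
max_err = max(abs(int(a * k + b) - i) for i, k in enumerate(keys))

def lookup(key):
    """Return the position of key in keys, or None if it is absent."""
    guess = min(n - 1, max(0, int(a * key + b)))
    lo = max(0, guess - max_err)
    hi = min(n, guess + max_err + 1)
    # Binary search restricted to the model's error window.
    i = bisect.bisect_left(keys, key, lo, hi)
    return i if i < n and keys[i] == key else None

# Every stored key is found at its true position.
assert all(lookup(k) == i for i, k in enumerate(keys))
```

Because the keys were sampled uniformly, their CDF is nearly linear and the error window stays tiny, so each lookup touches only a handful of slots instead of walking a tree; a skewed distribution would need a better model, which is exactly the adaptivity the paper is after.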

Suppose now that the above becomes mainstream and databases eventually come with such indices. What about our expertise in understanding and using indices, is it now unnecessary? Do we become obsolete as engineers? Of course not! Instead, we are evolving. Nowadays, because our building blocks are algorithms, we think using keys, partitions and types; our abstractions and flows are based on these building blocks. In the - near? - future we will think using models and their composition. Their advantage is that models hide implementation details like primary and foreign keys and column types. They also compose easily, in arbitrary ways. For example, we won't have to specify joins and subselects; we will only have to pose the appropriate query, and the engine underneath will combine the models to answer it. More importantly, models can be shared, and that will lead to libraries of models. That will make learning easier, as software will be distributed alongside the data models it operates on. Imagine, for example, containers with database engines tailored for particular data sets, VMs with data pipelines optimized for selected tasks, or even whole cloud deployments tuned for our applications. No more configuration, parameter tuning, setup scripts etc. We may take this a bit further and get software by presenting the data, not the other way around as we do today. We are making the first steps to escape the world of technological fetters for a true information age, where we work only with, and for, pure information.

Aftermath

AI evolves at a geometric pace and will eventually enter everybody's comfort zone. But it is not something to worry about. Every new advance surely makes old knowledge obsolete, but it also opens new ways for exploration and usage. The future is very promising, both for our work as IT professionals and for our lives as digital citizens.

PS I will be happy to meet you at RetroCon 2031 and play a game of chess with you after my talk on implementing secondary indices with Java.



