This post is by UT Austin grad student Elliot Meyerson
I spent February 12-17 in Phoenix at the 30th AAAI Conference on Artificial Intelligence. AAAI covers artificial intelligence “broadly conceived”, with a focus on “traditional topics such as search, machine learning, planning, knowledge representation, reasoning, natural language processing, robotics and perception, and multiagent systems.” Evolutionary Computation (EC), a large subfield of AI, tends to be underrepresented at mainstream AI conferences like AAAI; rather, EC is featured at more specialized venues such as GECCO, PPSN, and CEC. As an EC researcher, attending AAAI gave me a chance to update my view of how EC fits into the broader field, and to find inspiration in places I don’t normally look; a few things stood out in particular.
EC covers modeling and optimization techniques inspired by natural evolution. Generally, EC systems maintain a population of individuals encoded as possible solutions to a computational problem or task. Individuals are selected for recombination and mutation based on their fitness, a value corresponding to the success of the individual in the task. Intuitively, evolution will naturally progress to produce individuals that maximize this fitness. In this form, fitness is the only information collected from the task, which is otherwise considered a black-box.
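To make the black-box setting concrete, here is a minimal sketch of the generic EC loop described above. All names, operators, and parameter values are illustrative choices, not any particular published algorithm:

```python
import random

def evolve(fitness, genome_len=10, pop_size=20, generations=50,
           mutation_rate=0.1, seed=0):
    """Toy generational EC loop: fitness is the only task feedback used."""
    rng = random.Random(seed)
    # Population of real-valued genomes, initialized uniformly at random.
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + rng.gauss(0, 0.1) if rng.random() < mutation_rate
                     else g for g in child]        # Gaussian mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Toy black-box task: maximize the sum of the genes.
best = evolve(fitness=sum)
```

Note that the fitness function is only ever called, never inspected: the loop would run unchanged on any other black-box objective.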
A variety of black-box optimization techniques aside from EC populate the AI community. At AAAI, I found only two papers that explicitly used EC, including the one I presented [1,2]. Other authors were interested in theoretical results that could help bridge the gap between EC and the more theory-oriented AI community. Their papers considered derivative-free algorithms that use operations analogous to mutation but forsake crossover, enabling formal proofs of their probabilistic approximation abilities (under certain assumptions about the black box) [3,4].
Another black-box approach with nice theoretical properties, and one even more prevalent at AAAI, was Bayesian optimization (BO). Given all the individuals evaluated so far, instead of generating new individuals through mutation and crossover, BO always generates the individual that maximizes some probabilistic acquisition function, often the expected improvement in the task. Under certain assumptions, BO is theoretically guaranteed to converge to the true global optimum, a quality many EC researchers only dream of. Also, in EC, maintaining and understanding diversity is an important goal; some BO and other techniques at AAAI also made use of the popular formalization of submodularity to prove the approximate efficacy of a diverse collection of solutions [2]. BO was showcased in an invited talk by Andreas Krause, who demonstrated its applicability to robotics and protein design, where BO outperformed previously tried EC approaches [5]. It was encouraging to see the success of such automation in design for synthetic biology, a domain I am interested in pursuing.
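To illustrate the contrast with EC’s sampling-based generation, here is a toy 1-D BO loop: a hand-rolled Gaussian process with an RBF kernel and an expected-improvement acquisition, maximized over a candidate grid. This is a sketch under arbitrary parameter choices (length scale, grid size, initial design); in practice one would use a mature BO library:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, length_scale=0.3):
    """RBF kernel matrix between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def expected_improvement(X, y, candidates, noise=1e-6):
    """GP posterior mean/variance at candidates, then EI over the best y."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(candidates, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    sigma = np.sqrt(np.maximum(var, 1e-12))
    z = (mu - y.max()) / sigma
    cdf = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - y.max()) * cdf + sigma * pdf

def bayes_opt(f, n_iter=15, seed=0):
    """Maximize a black box f on [0, 1] by always picking the EI maximizer."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, 3)                  # small random initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(0, 1, 200)
    for _ in range(n_iter):
        x_next = grid[np.argmax(expected_improvement(X, y, grid))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

# Toy black box with its optimum at x = 0.7.
x_best, y_best = bayes_opt(lambda x: -(x - 0.7) ** 2)
```

Unlike the EC loop, each new evaluation here is chosen deterministically from a model of all evidence so far, which is what makes convergence guarantees tractable.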
However, I believe synthetic biology domains are among many for which optimization can be qualitatively advanced by automation techniques that go beyond the black box, incorporating additional information about the task, the behavior of individuals in the task, or features of the evolutionary process. As an example, novelty search is an approach to escaping local optima by rewarding individuals that demonstrate novelty with respect to what they do on their way to receiving their fitness score. Along with providing a powerful search methodology, insights gained in the development and analysis of novelty search can help inform our understanding of natural evolution, e.g., with respect to the evolution of cognition [6] and the utility of extinction [7].
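One common way to operationalize this is to score an individual by the mean distance from its behavior descriptor to its nearest neighbors among behaviors seen so far. A minimal sketch (the choice of descriptor and distance is task-dependent, and this particular helper is illustrative):

```python
def novelty(behavior, archive, k=3):
    """Novelty of a behavior descriptor: mean Euclidean distance to its
    k nearest neighbors among previously observed behaviors."""
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(behavior, other)) ** 0.5
        for other in archive
    )
    return sum(dists[:k]) / min(k, len(dists))

# E.g., with 2-D behavior descriptors: the two nearest archived behaviors
# are at distances 0.5 and 1.0, so novelty is their mean.
score = novelty((0.0, 0.0),
                [(1.0, 0.0), (0.0, 1.0), (2.0, 2.0), (0.0, 0.5)], k=2)
```

Selecting on this score instead of (or alongside) fitness is exactly what makes the method beyond-black-box: it uses what individuals do, not just the scalar they earn.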
At AAAI, beyond-black-box techniques arose in the domain of algorithm configuration (AC), particularly in an excellent tutorial led by Frank Hutter. AC is the problem of automatically optimizing the settings of an algorithm for a particular application. In algorithms across optimization and machine learning, the choice of settings can have huge effects on the algorithm’s behavior and performance. AC works by testing out various settings when applying an algorithm to a set of task instances, and using the results to select settings that will hopefully perform well on future tasks. For EC researchers, AC can be a very useful tool, as EC algorithms often have several parameters, whose optimization can provide both improved performance and insight into how each evolutionary mechanism practically affects evolution across a variety of tasks. The most popular AC techniques use BO [8], and there are competitive EC approaches as well [9]. Beyond the black box, some techniques used extensive analysis of task features to predict which settings will be optimal for novel classes of future tasks. Hutter also suggested using learning-curve information to preemptively terminate unpromising experiments. Like protein design, algorithm configuration consists of fixing an initial state and then watching a complex, hard-to-predict temporal process unfold, which leads to the fitness output. There is a ton of additional information produced by this temporal process that can potentially be harnessed to improve search.
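In its simplest form, AC is a search over settings scored by average performance on a set of training instances. The toy random-search configurator below is only a sketch of that skeleton (real AC tools build models over configurations and task features; every name here is hypothetical):

```python
import random

def configure(run, config_space, instances, n_trials=50, seed=0):
    """Toy random-search configurator: sample settings, score each by mean
    performance over training instances, return the best-scoring setting.
    `run(config, instance)` is the black box, returning a score to maximize."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(vals) for name, vals in config_space.items()}
        score = sum(run(cfg, inst) for inst in instances) / len(instances)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg

# Hypothetical target algorithm whose performance peaks at rate = 0.3
# and improves slightly with elitism enabled.
space = {"rate": [0.1, 0.3, 0.5, 0.9], "elitism": [True, False]}
best = configure(
    lambda cfg, inst: -abs(cfg["rate"] - 0.3) + (0.1 if cfg["elitism"] else 0.0),
    space, instances=range(5))
```

Everything a real configurator adds, such as models of which settings work on which instance features, or early termination from learning curves, slots into this loop in place of the blind random sampling.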
I was very inspired by BO and AC, and have already started messing around with one BO-based AC tool, Spearmint, and working towards using it in my research :). Due to differences in technical details and language, it can often be difficult to take advantage of related research in disparate fields. All in all, an immersive week in the world of AAAI proved to be an effective recontextualization opportunity, aka shake-up. I’m sure there’s such a thing as too many shake-ups, but I think most of us probably don’t get enough of them. Also, in Phoenix the hiking was good.
Some other unrelated but fun stuff from AAAI for researchers:
Semantic Scholar, from the Allen Institute for AI. Goal: make lit-search more efficient.
Auto-poster-generation [10]: I will never manually make a poster again (note: the poster presenting this paper was not auto-generated).
[1] Braylan, A.; Hollenbeck, M.; Meyerson, E.; Miikkulainen, R. Reuse of Neural Modules for General Video Game Playing. AAAI 2016.
[2] Singla, A.; Tschiatschek, S.; Krause, A. Noisy Submodular Maximization via Adaptive Sampling with Applications to Crowdsourced Image Collection Summarization. AAAI 2016.
[3] Yu, Y.; Qian, H.; Hu, Y. Derivative-Free Optimization via Classification. AAAI 2016.
[4] Qian, H.; Yu, Y. Scaling Simultaneous Optimistic Optimization for High-Dimensional Non-Convex Functions with Low Effective Dimensions. AAAI 2016.
[5] Romero, P. A.; Krause, A.; Arnold, F. H. Navigating the Protein Fitness Landscape with Gaussian Processes. PNAS 2013.
[6] Lehman, J.; Miikkulainen, R. Overcoming Deception in Evolution of Cognitive Behaviors. GECCO 2014.
[7] Lehman, J.; Miikkulainen, R. Extinction Events Can Accelerate Evolution. PLOS ONE 2015.
[8] Snoek, J.; Larochelle, H.; Adams, R. P. Practical Bayesian Optimization of Machine Learning Algorithms. NIPS 2012.
[9] Ansótegui, C.; Sellmann, M.; Tierney, K. A Gender-Based Genetic Algorithm for the Automatic Configuration of Algorithms. CP 2009.
[10] Qiang, Y.; Fu, Y.; Guo, Y.; Zhou, Z.; Sigal, L. Learning to Generate Posters of Scientific Papers. AAAI 2016.