27th International Conference on Inductive Logic Programming
4-6 Sep 2017 Orléans (France)

Invited speakers

Marc Boullé : Automatic Feature Construction for Supervised Classification from Large Scale Multi-Relational Data

In large telecommunication companies like Orange, data are collected at a petabyte scale, across a variety of domains ranging from network design, ergonomics, and text and web mining to customer relationship management. Given the vast number of data mining tasks, the challenge is to propose methodologies and tools that industrialize the data mining process as far as possible. The raw data most often come with a relational structure, for example with customers in a main table and their call detail records (CDR) in a secondary table. This kind of data requires a heavy data preparation phase, involving feature selection and construction. At Orange Labs, we have developed an approach to automate feature construction in the multi-relational data mining setting. In this setting, domain knowledge is specified by describing the structure of the data by means of attributes, tables and links across tables, and by choosing construction rules. Mining relational data requires learning complex features that aggregate properties of related objects. The space of features that can be constructed is virtually infinite, which raises both combinatorial and over-fitting problems. A prior distribution is introduced over all the constructed features, together with an effective algorithm to draw samples of constructed features from this distribution. Extensive experiments show that the approach is robust and efficient, outperforms the state of the art, and can handle today's large-scale industrial problems. This approach is available in a tool named Khiops, widely used in Orange for mining large-scale multi-relational databases. Data mining studies can now be completed in hours, not weeks.
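To make the setting concrete, here is a purely illustrative sketch of multi-relational feature construction. It is not the Khiops algorithm: the tables, the rule set, and the uniform sampling below are all invented for illustration (the actual approach uses a hierarchical prior over a much richer rule language).

```python
import random
import statistics

# Hypothetical toy data: a main "customers" table and a secondary
# "calls" table linked by customer id, as in the CDR example.
customers = {1: {"age": 34}, 2: {"age": 51}}
calls = {
    1: [{"duration": 120}, {"duration": 45}],
    2: [{"duration": 300}],
}

# Construction rules: aggregates over the rows of the secondary table.
RULES = {
    "count": lambda rows, attr: len(rows),
    "mean": lambda rows, attr: statistics.mean(r[attr] for r in rows),
    "max": lambda rows, attr: max(r[attr] for r in rows),
    "sum": lambda rows, attr: sum(r[attr] for r in rows),
}

def sample_features(n, seed=0):
    """Draw n constructed features from the rule space.
    (Uniform here for simplicity; the real approach samples
    from a prior distribution over constructed features.)"""
    rng = random.Random(seed)
    return [(rng.choice(list(RULES)), "duration") for _ in range(n)]

def evaluate(feature, customer_id):
    """Compute one constructed feature for one main-table record."""
    rule, attr = feature
    return RULES[rule](calls[customer_id], attr)
```

Sampling features rather than enumerating them is what keeps the virtually infinite construction space tractable.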

Dr. Boullé is a Senior Researcher in the data mining research group of Orange Labs (formerly France Télécom R&D). His main research interests include statistical data analysis and data mining, especially data preparation and modeling for large databases. He has developed regularized methods for feature preprocessing, feature selection and construction for multi-relational databases, correlation analysis, and model averaging of selective naive Bayes classifiers and regressors.


Alan Bundy : Can Computers Change their Minds?

Autonomous agents require models of their environment in order to interpret sensory data and to make plans to achieve their goals, including anticipating the results of their own actions and those of other agents. These models, including their models of other agents, must change when the environment changes or when their goals change, since successful problem solving depends on choosing the right representation of the problem. We are especially interested in conceptual change, i.e., a change of the language in which the model is expressed. Failures of reasoning can suggest repairs to faulty models. Such failures can, for instance, take the form of inferring something false, failing to infer something true, or inference simply taking too long. I will illustrate the automated repair of faulty models, drawing both on work on multi-agent planning and on the evolution of theories of physics.
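The failure-driven repair cycle can be sketched in miniature. This is my own toy illustration, not Prof. Bundy's repair systems: a faulty rule ("every bird flies") lets the agent infer something false, and the observed failure is repaired by specialising the rule with an exception. All predicates and names are invented.

```python
# Known facts about a toy world.
facts = {("bird", "tweety"), ("bird", "pingu"), ("penguin", "pingu")}
# Faulty model: a single over-general rule, "every bird flies".
rules = [{"if": [("bird", "?x")], "then": ("flies", "?x")}]

def infer(facts, rules):
    """Forward-chain single-premise rules, respecting exceptions."""
    derived = set(facts)
    for rule in rules:
        for pred, obj in list(derived):
            if all(p == pred for p, _ in rule["if"]):
                exception = rule.get("unless")
                if exception and (exception, obj) in derived:
                    continue  # blocked by a repair
                derived.add((rule["then"][0], obj))
    return derived

def repair(rules, false_fact, facts):
    """Crude repair: when a false conclusion is derived, specialise the
    offending rule with a property that distinguishes the counterexample."""
    _, obj = false_fact
    for rule in rules:
        if rule["then"][0] == false_fact[0]:
            for pred, o in facts:
                if o == obj and pred != rule["if"][0][0]:
                    rule["unless"] = pred
    return rules

model = infer(facts, rules)
assert ("flies", "pingu") in model        # reasoning failure: false inference
repair(rules, ("flies", "pingu"), facts)  # conceptual change: add exception
model = infer(facts, rules)
```

After the repair the model no longer concludes that pingu flies, while the correct inference about tweety is preserved.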

Keywords: representation, inference, faulty models, automated repair and evolution.


Alan Bundy is Professor of Automated Reasoning at the University of Edinburgh. He is a fellow of the Royal Society, the Royal Academy of Engineering and the Association for Computing Machinery. He was awarded the IJCAI Research Excellence Award (2007), the CADE Herbrand Award (2007) and a CBE (2012). He was Edinburgh's Head of Informatics (1998-2001) and a member of: the Hewlett-Packard Research Board (1989-91); both the 2001 and 2008 Computer Science RAE panels (1999-2001, 2005-2008). He was the founding Convener of UKCRC (2000-2005) and a Vice President of the BCS (2010-12). He is the author of over 290 publications.




Jennifer Neville : Learning from Single Networks - the Impact of Network Structure on Relational Learning and Collective Inference

Network science focuses on analyzing network structure in order to understand key relational patterns in complex systems. In contrast, relational learning typically conditions on the relations in an observed network, using them as a form of inductive bias to constrain the space of dependencies (among entities) considered during learning. While recent interest in these two fields has produced a large body of research on models of both network structure and relational data, there has been less attention on the intersection of the two fields, specifically on the impact of network structure on relational learning methods. Since many relational domains comprise a single, large, partially-labeled network, many of the conventional assumptions in relational learning are no longer valid, and the network structure creates unique statistical challenges for learning and inference algorithms. In this talk, I will discuss the complex interaction between local model properties, global network structure, and the availability of observed attributes that occurs in templated relational models rolled out over a single large network. By understanding the impact of these interactions on algorithm performance (e.g., learning, inference, and evaluation), we can develop more accurate and efficient analysis methods for large network datasets.
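The single-network setting can be illustrated with the simplest possible collective inference scheme. This sketch is my own (not a model from the talk): a small invented graph with sparse observed labels, where each unlabelled node repeatedly takes the majority label of its neighbours, so predictions depend jointly on the labels and on the network structure.

```python
# Toy single network: a chain with labels observed only at the ends.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
labels = {0: "A", 5: "B"}  # sparse observed labels

neighbours = {}
for u, v in edges:
    neighbours.setdefault(u, []).append(v)
    neighbours.setdefault(v, []).append(u)

def collective_inference(neighbours, labels, iters=10):
    """Iterative classification: unlabelled nodes adopt the majority
    label among neighbours already assigned one; observed labels stay fixed."""
    pred = dict(labels)
    for _ in range(iters):
        for node in neighbours:
            if node in labels:
                continue
            votes = [pred[n] for n in neighbours[node] if n in pred]
            if votes:
                # ties broken alphabetically, for determinism
                pred[node] = max(sorted(set(votes)), key=votes.count)
    return pred

pred = collective_inference(neighbours, labels)
```

Even in this tiny example, the outcome for the interior nodes is determined by where the observed labels sit in the network, which is exactly the kind of interaction between structure and label availability the talk examines.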


Jennifer Neville is the Miller Family Chair Associate Professor of Computer Science and Statistics at Purdue University. She received her PhD from the University of Massachusetts Amherst in 2006. She is currently an elected member of the AAAI Executive Council, and she was recently PC chair of the 9th ACM International Conference on Web Search and Data Mining. In 2012, she was awarded an NSF Career Award; in 2008 she was chosen by IEEE as one of "AI's 10 to watch"; and in 2007 she was selected as a member of the DARPA Computer Science Study Group. Her research, which includes more than 100 published papers with 5000 citations, focuses on developing data mining and machine learning techniques for complex relational and network domains, including social, information, and physical networks.


Mathias Niepert : Learning Knowledge Base Representations with Relational, Latent, and Numerical Features

The importance of knowledge bases (KBs) for AI systems has been demonstrated numerous times. KBs provide ways to organize, manage, and retrieve structured data and allow AI systems to perform reasoning in various domains. In my talk, I will discuss the strengths and weaknesses of various feature types typically occurring in knowledge bases (relational, numerical, and visual), and I will present novel methods for combining diverse feature types into joint machine learning models for query answering. These models are not only accurate and efficient on standard knowledge base completion tasks but also support completely novel query types, such as queries that involve images. I will also present several data sets (featuring numerical and visual data) which we hope fellow researchers will find helpful.
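To fix ideas, here is a minimal sketch of embedding-based query answering over a KB, in the DistMult style. It is my own illustration, not one of the models from the talk: the entities, relation, dimensionality, and random (untrained) embeddings are all invented, and real systems would learn the embeddings and fold in numerical or visual features as additional score terms.

```python
import random

random.seed(0)
DIM = 8
entities = {"Berlin": 0, "Germany": 1, "Paris": 2}
relations = {"capital_of": 0}
# Untrained embeddings, for illustration only.
E = [[random.gauss(0, 1) for _ in range(DIM)] for _ in entities]
R = [[random.gauss(0, 1) for _ in range(DIM)] for _ in relations]

def score(h, r, t):
    """DistMult score sum_i e_h[i] * w_r[i] * e_t[i]; higher = more plausible."""
    eh, wr, et = E[entities[h]], R[relations[r]], E[entities[t]]
    return sum(a * b * c for a, b, c in zip(eh, wr, et))

# Query answering as ranking: candidate tails for (Berlin, capital_of, ?).
candidates = sorted(entities,
                    key=lambda t: score("Berlin", "capital_of", t),
                    reverse=True)
```

One known weakness this exposes: the elementwise product makes DistMult symmetric in head and tail, so it cannot distinguish a relation from its inverse, which is one reason richer feature types and model combinations matter.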


Mathias Niepert is a senior researcher at NEC Labs Europe in Heidelberg. He received a PhD degree from Indiana University, USA, and was a research associate at the University of Washington, Seattle, and the University of Mannheim, Germany. Since 2015 he has been a senior research scientist in the Networked Systems and Data Analytics group. Mathias has published over 40 papers in leading conferences, journals and workshops, including ICML, NIPS, AAAI, IJCAI, and UAI. He has won several best paper awards and a Google faculty research award, and has organized workshops in the area of statistical relational learning.
