Abstract
The linear Gaussian Bayesian network (GBN) is a commonly used graphical probabilistic model for continuous variables. An advantage of the GBN over alternative models is its use of well-known Gaussian distributions together with simple linear models between the nodes, which makes GBNs easy to understand and interpret. The structure and linear models of these networks can be specified by domain experts, but they can also be learned from data using several types of learning methods. In this research we concentrate on score-based structure learning methods, as they are efficient and easy to extend. However, current learning methods for GBNs lack two important aspects: interaction effects among variables and higher-order powers of variables. These aspects are frequently observed in fields such as social science, where the use of Bayesian networks is also popular. Our hypothesis is that a new learning method that takes these two aspects into account would provide better results regarding the underlying structure and learned models of a GBN. To test this hypothesis, we developed a learning algorithm along with several variants, which we applied to a wide range of synthetic datasets. To evaluate the performance of our methods, we compared the reconstructed models to a ground-truth model using metrics such as structural Hamming distance, log-likelihood, and runtime. Although we designed our algorithms to address these missing aspects, the results showed that our algorithms were not able to benefit from them. However, the algorithms proved to be useful in general, as they regularize the learning process of GBNs, especially on smaller datasets, resulting in better models compared to a default learning process.
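To illustrate the kind of score-based structure learning the abstract refers to, the sketch below shows a minimal greedy hill-climbing search over linear Gaussian network structures with a BIC score, written from scratch with NumPy. It is an illustrative assumption of how such a learner can look, not the algorithm developed in this thesis; all function names and scoring details are hypothetical.

```python
import itertools
import numpy as np

def node_bic(data, child, parents):
    """BIC contribution of one node: fit a linear Gaussian model of child on its parents."""
    n = data.shape[0]
    y = data[:, child]
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1  # regression coefficients plus the noise variance
    return loglik - 0.5 * k * np.log(n)

def creates_cycle(parents, child, parent):
    """True if adding the edge parent -> child would close a directed cycle."""
    stack, seen = [parent], set()
    while stack:
        node = stack.pop()
        if node == child:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(parents[node])
    return False

def hill_climb(data):
    """Greedy single-edge additions/removals until the total BIC stops improving."""
    d = data.shape[1]
    parents = {i: set() for i in range(d)}
    score = sum(node_bic(data, i, sorted(parents[i])) for i in range(d))
    improved = True
    while improved:
        improved = False
        for child, parent in itertools.permutations(range(d), 2):
            new = set(parents[child])
            if parent in new:
                new.discard(parent)  # candidate move: remove the edge
            elif not creates_cycle(parents, child, parent):
                new.add(parent)      # candidate move: add the edge
            else:
                continue
            delta = (node_bic(data, child, sorted(new))
                     - node_bic(data, child, sorted(parents[child])))
            if delta > 1e-9:
                parents[child] = new
                score += delta
                improved = True
    return parents, score
```

Given a learned parent set, the structural Hamming distance mentioned above can then be computed by comparing the recovered edge set with the edge set of the ground-truth network, and the log-likelihood follows from the fitted per-node linear Gaussian models.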
| Date of Award | 10 May 2023 |
| --- | --- |
| Original language | English |
| Supervisor | Arjen Hommersom (Supervisor) & Stefano Schivo (Co-assessor) |
Master's Degree
- Master Computer Science