School of Cognitive Sciences
Paper: IPM / Cognitive / 8801

Abstract:
Representing and modeling knowledge in the face of uncertainty has always been a challenge in artificial intelligence. Graphical models are an apt way of representing uncertainty, and hidden variables in this framework offer a means of abstracting knowledge. It seems that hidden variables can represent concepts that reveal the relations among observed phenomena and capture their cause-and-effect relationships through structure learning. Our concern is mainly with concept learning in situated agents, which learn throughout their lives and attend to important states in order to maximize their expected reward. We therefore present an algorithm for sequential learning of Bayesian networks with hidden variables. The proposed algorithm employs recent advances in learning hidden-variable networks in the batch setting, and combines several approaches to allow sequential learning of both the parameters and the structure of the network. The incremental nature of this algorithm lets an agent learn gradually, over its lifetime, as data is gathered progressively. Furthermore, inference remains possible when facing a corpus of data too large to be handled as a whole.
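
To make the flavor of "sequential learning" concrete, the following is a minimal, hypothetical sketch, not the paper's algorithm: an online-EM-style parameter update for a toy hidden-variable network H -> X, in which expected sufficient statistics are accumulated one data case at a time rather than over a stored batch. The model, variable names, and update scheme are all illustrative assumptions; the paper additionally learns structure, which this sketch omits.

import numpy as np

rng = np.random.default_rng(0)

# Toy model: a single hidden binary "concept" H with one binary observation X.
# Parameters are P(H) and P(X | H), initialized arbitrarily.
p_h = np.array([0.5, 0.5])            # P(H = h)
p_x_given_h = np.array([[0.7, 0.3],   # P(X = x | H = 0)
                        [0.4, 0.6]])  # P(X = x | H = 1)

# Running expected sufficient statistics, with unit pseudo-counts as smoothing.
ess_h = np.ones(2)
ess_xh = np.ones((2, 2))

def sequential_em_step(x):
    """Process one observation x: an E-step over the hidden H, then an
    incremental M-step that re-normalizes the accumulated statistics."""
    global p_h, p_x_given_h, ess_h
    # E-step: posterior over the hidden variable for this single case.
    joint = p_h * p_x_given_h[:, x]      # proportional to P(H = h, X = x)
    posterior = joint / joint.sum()      # P(H = h | X = x)
    # Accumulate expected counts; no batch of past cases is stored.
    ess_h += posterior
    ess_xh[:, x] += posterior
    # Incremental M-step: refresh parameters from the running statistics.
    p_h = ess_h / ess_h.sum()
    p_x_given_h = ess_xh / ess_xh.sum(axis=1, keepdims=True)

# Data arrives one case at a time, as for a situated agent.
for _ in range(1000):
    sequential_em_step(int(rng.integers(2)))

print("P(H) =", p_h)
print("P(X|H) =", p_x_given_h)

Because each case is folded into the running statistics and then discarded, memory use stays constant no matter how much data the agent sees, which is what makes this style of update suitable for lifelong learning over corpora too large to process as a whole.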