|M.Sc. Student||Eyal Masad|
|Subject||Convergence of Learning Processes in Uncertainty Spaces|
|Department||Department of Mathematics||Supervisor||Professor Emeritus Simeon Reich|
Most mathematical models of learning processes are based on linear transformations in normed spaces. However, many learning processes are nonlinear and do not require any such structure, linear or otherwise.
In this thesis I present a model which is not limited to any particular structure except for a measurable space.
The information to be learned is a partition of the space. The process begins with a given partition and gradually finds the target partition by iterating the following steps:
produce an experiment; receive feedback (from a given feedback function); draw conclusions; and update the partition.
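The iterative loop above can be sketched on a toy finite space. In this illustrative example (not taken from the thesis), the feedback function answers whether two points lie in the same cell of the target partition, and each step splits one cell accordingly; the thesis allows far more general spaces and feedback functions.

```python
def learn_partition(space, same_cell, max_steps=1000):
    """Toy sketch: refine a partition until feedback reveals nothing new."""
    # Start from the trivial partition: a single cell containing everything.
    partition = [sorted(space)]
    for _ in range(max_steps):
        for cell in partition:
            if len(cell) < 2:
                continue
            # Experiment: compare the first point of the cell with the rest.
            x = cell[0]
            inside = [y for y in cell if same_cell(x, y)]       # feedback
            outside = [y for y in cell if not same_cell(x, y)]  # feedback
            if outside:
                # Conclusion: the cell mixes two target cells; update by splitting.
                partition.remove(cell)
                partition.extend([inside, outside])
                break
        else:
            # No cell changed: the target partition has been found.
            return partition
    return partition

# Target: the parity partition of {0, ..., 7}.
result = learn_partition(range(8), lambda x, y: x % 2 == y % 2)
print(sorted(map(sorted, result)))  # [[0, 2, 4, 6], [1, 3, 5, 7]]
```

The fixed-point test (no cell splits any further) is what plays the role of convergence in this finite toy; the thesis treats convergence in terms of information instead.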
The updating of the partitions is carried out by special functions defined on the σ-algebra, which are called σ-algebra endomorphisms. I have shown that such functions are induced by measurable functions from the measurable space into itself.
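In symbols, the canonical instance of this construction (sketched here as an illustration, with notation not taken from the thesis) is the preimage map of a measurable function $f \colon X \to X$ on a measurable space $(X, \Sigma)$:

```latex
% A measurable f : X -> X induces a map on the sigma-algebra by preimages.
\[
  \Phi_f \colon \Sigma \to \Sigma, \qquad \Phi_f(A) = f^{-1}(A),
\]
% Preimages respect the sigma-algebra operations:
\[
  \Phi_f\Big(\bigcup_{n} A_n\Big) = \bigcup_{n} \Phi_f(A_n), \qquad
  \Phi_f(X \setminus A) = X \setminus \Phi_f(A),
\]
```

so $\Phi_f$ preserves countable unions and complements, which is exactly the endomorphism property.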
After defining a probability measure on the measurable space, we use Shannon's entropy function to measure the uncertainty induced by each partition, and thus the amount of information that must be learned when passing from one partition to another.
We then define a metric on the space of partitions, which measures the information gap between any two partitions. The metric space formed in this way is called an uncertainty space.
In this thesis I construct learning processes that converge (in terms of information), under the assumption that the amount of information that is to be learned is finite. This last assumption is a necessary condition for the convergence of learning processes.
I also give several algorithms which rely on very primitive feedback functions. Again, convergence depends on the finiteness of the information to be learned.