Wednesday, 28 October 2015

On Decisions and Learning (Part II)

In the previous post, we saw how space and time constraints can affect decision making. In this part, we extend the model to one of practical interest: such a model should be robust across variations in space and time. In effect, we abstract away space and time and replace them with agents (players) who influence our decision process. To illustrate with an example, take a boy in, say, Brazil and a similarly aged boy in India. The Brazilian boy is more likely to latch on to football, while the Indian boy would choose cricket with higher probability.


The agents here are a representation of culture and tradition, and hence depend on space. There can be agents representing time as well. For instance, the decision to take an umbrella before leaving home depends on whether it is the rainy season or not; rain is the agent here, and it represents time. Thus agents are the physical representation of the (real) time-space parameters. Another aspect of agents is that certain agents contribute to the decision with more weight than others. Finally, decisions are taken in accordance with logic, past experience, and prediction, each of which corresponds to a time: the present, the past, and the future respectively.
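The idea that differently weighted agents pull a decision maker toward different choices can be sketched as a weighted vote. This is only an illustration of the notion above; the agent names, weights, and the `decide` helper are assumptions, not part of the model in the post.

```python
def decide(agent_votes):
    """Pick the decision with the largest total agent weight.

    agent_votes: dict mapping each agent name to a (decision, weight) pair.
    """
    totals = {}
    for decision, weight in agent_votes.values():
        totals[decision] = totals.get(decision, 0.0) + weight
    # the decision backed by the most cumulative weight wins
    return max(totals, key=totals.get)

# Hypothetical agents for the Indian boy's choice of sport:
votes = {
    "culture": ("cricket", 0.8),    # culture/tradition weighs heavily
    "peers": ("cricket", 0.5),
    "television": ("football", 0.6),
}
decide(votes)  # cricket: 0.8 + 0.5 = 1.3 outweighs football: 0.6
```

The weights express exactly the point above: an agent like culture can dominate weaker agents even when they push in the opposite direction.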

The process of making a decision can be summed up as \( f : (A_1, A_2, \cdots, A_k) \rightarrow \mathcal{D} \). Here \(A_1, \ldots, A_k\) are the agents to be considered while taking the decision, and the set \(\mathcal{D} = \{d_1, d_2, \cdots, d_n\}\) contains the decision choices. This model lacks the learning that has been accumulated over time. To explain learning in decision making, take the example of a man going out while it is raining (Agent: Rain). The man does not take the umbrella and gets wet (Decision: No Umbrella). The next time he goes out in the rain, he is reminded of the fact that he got wet (which carries a cost). It is thus possible to assign a cost to each decision taken over an agent set. Assume that the decision maker has learnt the costs of decisions taken over a set of agents up to time \(T\). From then on, the learned costs are considered while taking the decision.
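The mapping \(f\) together with the learned costs can be sketched as follows. This is a minimal illustration under my own assumptions (the `Learner` class and its cost table are not from the post): the decision maker keeps an accumulated cost for each (agents, decision) pair and picks the choice with the lowest learned cost.

```python
from collections import defaultdict

class Learner:
    def __init__(self, decisions):
        self.decisions = decisions      # the choice set D = {d_1, ..., d_n}
        self.cost = defaultdict(float)  # learned cost per (agents, decision)

    def decide(self, agents):
        # f : (A_1, ..., A_k) -> D, now informed by accumulated costs
        return min(self.decisions, key=lambda d: self.cost[(agents, d)])

    def learn(self, agents, decision, observed_cost):
        # accumulate the cost of a decision taken under these agents
        self.cost[(agents, decision)] += observed_cost

# The umbrella example: Agent = Rain, choices = {umbrella, no umbrella}.
learner = Learner(["umbrella", "no umbrella"])
learner.learn(("rain",), "no umbrella", 1.0)  # he got wet: assign a cost
learner.decide(("rain",))                     # -> "umbrella" next time
```

Before time \(T\) the cost table is being filled in; once the system is "opened", `decide` consults those costs, so a decision that went badly under the same agents is avoided.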
