It is desired to minimize the expected cost of finding an object which moves back and forth between two locations according to an unobservable Markov process. When the object is in location $i$ $(i=1,2)$ it resides there for a time which is exponentially distributed with parameter $\lambda$, and then moves to the other location. The location of the object is not known, and at each instant until it is found exactly one of the two locations must be searched. Searching location $i$ for a time $\delta$ costs $c_i\delta$, and conditional on the object being in location $i$ there is a probability $\alpha_i\delta+o(\delta)$ that this search will find it. The probability that the object starts in location 1 is known to be $p_1(0)$. The location to be searched at time $t$ is to be chosen on the basis of the value of $p_1(t)$, the probability that the object is in location 1 given that it has not yet been discovered. We prove that there exists a threshold $\Pi$ such that the optimal policy may be described as: {\em search location 1 if and only if the probability that the object is in location 1 is greater than $\Pi$}. Expressions for the threshold $\Pi$ are given in terms of the parameters of the model.
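For orientation, it may help to record how the sufficient statistic $p_1(t)$ evolves; this is a standard Bayesian filtering computation sketched here only as an illustration, not as part of the results above. Conditional on the object not yet having been found, $p_1(t)$ satisfies the deterministic differential equation
\[
\frac{dp_1}{dt} \;=\;
\begin{cases}
\lambda\bigl(1-2p_1\bigr) \;-\; \alpha_1\,p_1(1-p_1), & \text{while location 1 is searched},\\[4pt]
\lambda\bigl(1-2p_1\bigr) \;+\; \alpha_2\,p_1(1-p_1), & \text{while location 2 is searched},
\end{cases}
\]
where the first term reflects the object's movement between the two locations and the second the information gained from an unsuccessful search. The threshold policy described above is thus a rule acting on the trajectory of this equation: location 1 is searched exactly while $p_1(t)>\Pi$.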