This work addresses stochastic optimal control problems in which the unknown state evolves in continuous time while partial, noisy, and possibly controllable measurements are available only at discrete times. We develop a framework for controlling such systems whose state is a probability measure-valued process: the measure summarizes the available noisy, incomplete data and is updated at each measurement time through a Bayesian mechanism that feeds directly into the control decisions. We characterize optimality of the control in terms of a sequence of interlaced Hamilton-Jacobi-Bellman (HJB) equations coupled with controlled impulse steps at the measurement times. For controlled Gaussian processes, we derive an equivalent HJB equation whose state variable is finite-dimensional, namely the conditional mean and covariance of the state. We demonstrate the effectiveness of our methods through numerical examples, including control under perfect observations, under no observations, and under noisy observations. The numerical results highlight significant differences in the resulting control strategies and their performance, emphasizing the challenges and computational demands of handling uncertainty in state observation.
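To fix ideas, a minimal sketch of the Bayesian measurement update in the Gaussian case is given below; it assumes a linear observation model $y_k = H x_{t_k} + v_k$ with $v_k \sim \mathcal{N}(0, R)$, where the symbols $H$, $R$, $m^{\pm}$, and $P^{\pm}$ are generic placeholders rather than notation from this work. Under these assumptions, the prior mean $m^-$ and covariance $P^-$ at a measurement time are mapped to posterior values by the standard Kalman-type step
\begin{align*}
K &= P^- H^\top \left( H P^- H^\top + R \right)^{-1}, \\
m^+ &= m^- + K \left( y_k - H m^- \right), \\
P^+ &= \left( I - K H \right) P^-,
\end{align*}
which illustrates the kind of finite-dimensional jump in the mean-covariance state that can occur at measurement times between consecutive HJB stages in such a setting.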