Many applications have benefited remarkably from low-dimensional models in the recent decade. The fact that many signals, though high dimensional, are intrinsically low dimensional has made it possible to recover them stably from a relatively small number of measurements. For example, in compressed sensing with the standard (synthesis) sparsity prior and in matrix completion, the number of measurements needed is proportional (up to a logarithmic factor) to the signal's manifold dimension.

Recently, a new natural low-dimensional signal model has been proposed: the cosparse analysis prior. In the noiseless case, it is possible to recover signals from this model, using a combinatorial search, from a number of measurements proportional to the signal's manifold dimension. However, if we ask for stability to noise or for an efficient (polynomial-complexity) solver, all existing results demand a number of measurements far removed from the manifold dimension, sometimes far greater. It is therefore natural to ask whether this gap is a deficiency of the theory and the solvers, or whether there is a real barrier to recovering cosparse signals by relying only on their manifold dimension. Is there an algorithm which, in the presence of noise, can accurately recover a cosparse signal from a number of measurements proportional to the manifold dimension? In this work, we prove that there is no such algorithm. Further, we show through numerical simulations that even in the noiseless case, convex relaxations fail when the number of measurements is comparable to the manifold dimension. This gives a practical counter-example to the growing literature on compressed acquisition of signals based on manifold dimension.
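To make the cosparse analysis model concrete, the following is a minimal numerical sketch (not the paper's experiment; the operator, dimensions, and variable names are illustrative assumptions). A signal x in R^d is ell-cosparse with respect to an analysis operator Omega when Omega x has at least ell zero entries; for a generic Omega, the set of such signals is locally a union of subspaces of dimension d - ell, which is the manifold dimension the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: signal dimension d, analysis rows p, cosparsity ell.
d, p, ell = 20, 30, 15
Omega = rng.standard_normal((p, d))  # generic analysis operator (assumption)

# Build an ell-cosparse signal: choose a cosupport Lambda of ell rows of
# Omega and take x in the null space of those rows.
Lambda = rng.choice(p, size=ell, replace=False)
_, _, Vt = np.linalg.svd(Omega[Lambda])
null_basis = Vt[ell:]                       # basis of the (d - ell)-dim null space
x = null_basis.T @ rng.standard_normal(d - ell)

cosparsity = int(np.sum(np.abs(Omega @ x) < 1e-10))
manifold_dim = d - ell                      # local dimension of the model set
print(cosparsity, manifold_dim)             # cosparsity >= ell; manifold_dim = 5
```

For these generic sizes the signal lives on a 5-dimensional piece of the model set, even though d = 20; the paper's question is how many measurements of such an x suffice for stable or efficient recovery.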