You may view accuracy as the $R^2$ of classification: an initially appealing metric for comparing models, but one that falls short under detailed examination.
In both cases overfitting can be a major problem. Just as a high $R^2$ might mean that you are modelling the noise rather than the signal, a high accuracy may be a red flag that your model is tuned too closely to your test dataset and does not have general applicability. This is especially problematic when you have highly imbalanced classification categories. The most accurate model might be a trivial one which classifies all data as one category (with accuracy equal to the proportion of the most frequent category), but this accuracy will fall spectacularly if you need to classify a dataset with a different true distribution of categories.
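As a rough illustration (not from any particular dataset, and the 95/5 split is just made up), here is a sketch using scikit-learn's `DummyClassifier`: the trivial majority-class model looks impressive on imbalanced data and collapses when the class balance changes.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical imbalanced dataset: roughly 95% of cases are class 0.
X_train = rng.normal(size=(1000, 3))
y_train = (rng.random(1000) < 0.05).astype(int)

# Trivial model that always predicts the most frequent class.
majority = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print(accuracy_score(y_train, majority.predict(X_train)))  # ~0.95, looks great

# The same model scored on data with a different class balance (~50/50).
X_new = rng.normal(size=(1000, 3))
y_new = (rng.random(1000) < 0.5).astype(int)
print(accuracy_score(y_new, majority.predict(X_new)))      # ~0.5, no better than a coin flip
```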
As others have noted, another problem with accuracy is its implicit indifference to the price of failure, i.e. the assumption that all misclassifications are equal. In practice they are not: the cost of a wrong classification is highly subject-dependent, and you may prefer to minimise a particular kind of wrongness rather than maximise accuracy.
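A toy sketch of the point (the labels, predictions and the cost matrix are entirely invented): two models with identical accuracy can have wildly different expected costs once you say how much a false negative hurts relative to a false positive.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical labels: 90 negatives followed by 10 positives.
y_true = np.array([0] * 90 + [1] * 10)

# Model A never flags a positive; Model B catches every positive
# at the price of 10 false alarms. Both are 90% accurate.
model_a = np.array([0] * 100)
model_b = np.array([0] * 80 + [1] * 20)

# Assumed cost matrix: rows = true class, columns = predicted class.
# A missed positive (false negative) is taken to cost 50x a false alarm.
cost = np.array([[0, 1],
                 [50, 0]])

for name, y_pred in [("A", model_a), ("B", model_b)]:
    cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
    total_cost = (cm * cost).sum()
    print(name, accuracy_score(y_true, y_pred), total_cost)
# A: accuracy 0.90, cost 500 (misses all 10 positives)
# B: accuracy 0.90, cost 10  (only 10 cheap false alarms)
```

Accuracy cannot distinguish the two models, but under those assumed costs Model B is clearly the one you would deploy.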