DaL's answer says exactly this. I'll illustrate it with a very simple example about... selling eggs.
You own an egg shop, and each egg you sell generates a net revenue of $2$ dollars. Each customer who enters the shop may either buy an egg or leave without buying one. For some customers you can decide to offer a discount: you then get only $1$ dollar of revenue, but the customer will always buy.
You install a webcam that analyses each customer's behaviour using features such as "sniffs the eggs" or "holds a book of omelette recipes", and classifies each customer into "wants to buy at $2$ dollars" (positive) or "wants to buy only at $1$ dollar" (negative) before they leave.
If your classifier makes no mistakes, you earn the maximum revenue you can expect. If it's not perfect, then:
- for every false positive you lose $1$ dollar, because the customer leaves and you didn't offer the discount that would have made the sale
- for every false negative you lose $1$ dollar, because you offer a useless discount
So the accuracy of your classifier measures exactly how close you are to the maximum revenue. It is the perfect measure here.
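A minimal sketch of this claim, with hypothetical simulated customers (the labels, predictions, and error rate are all made up for illustration): since every mistake, false positive or false negative, costs exactly $1$ dollar, the gap between actual and maximum revenue equals the number of misclassifications, so accuracy tracks revenue directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: 1 = would buy at $2, 0 = buys only at the $1 discount
y_true = rng.integers(0, 2, size=1000)
# Hypothetical classifier: right about 90% of the time
y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)

# Revenue per customer:
#   predicted positive & truly positive -> $2 (full-price sale)
#   predicted positive & truly negative -> $0 (customer leaves: false positive)
#   predicted negative                  -> $1 (discount always sells;
#                                              a false negative wastes $1)
revenue = np.where(y_pred == 1, np.where(y_true == 1, 2.0, 0.0), 1.0)
max_revenue = np.where(y_true == 1, 2.0, 1.0)  # perfect-classifier revenue

accuracy = (y_pred == y_true).mean()
# Every error costs exactly $1, so the revenue gap equals the error count
gap = max_revenue.sum() - revenue.sum()
errors = (y_pred != y_true).sum()
```

Here `gap == errors` holds exactly, which is why plain accuracy is the right metric in this symmetric-cost case.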
But now suppose the discounted price is $a$ dollars. The costs become:
- false positive: $a$
- false negative: $2-a$
Then you need an accuracy weighted with these costs to measure the efficiency of the classifier. If $a = 0.001$, for example, the measure is totally different. This situation is typically related to imbalanced data: few customers are ready to pay $2$, while most would only pay $0.001$. You don't mind collecting many false positives in order to get a few more true positives, and you can adjust the classifier's decision threshold accordingly.
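A sketch of that threshold adjustment, under entirely made-up assumptions (the score distribution, class imbalance, and cost function are illustrative, not from any real classifier): we sweep decision thresholds and pick the one minimising the total cost $a \cdot FP + (2-a) \cdot FN$. With $a = 0.001$ false positives are nearly free, so the cheapest threshold is a permissive one that avoids false negatives.

```python
import numpy as np

a = 0.001  # discounted price: false positive costs a, false negative costs 2 - a

rng = np.random.default_rng(1)
n = 10_000
# Imbalanced data: only ~5% of customers would pay $2
y_true = (rng.random(n) < 0.05).astype(int)
# Toy classifier scores: positives score in [0.3, 1.0], negatives in [0.0, 0.7]
scores = y_true * 0.3 + rng.random(n) * 0.7

def expected_cost(threshold):
    """Total dollar cost of classifying with the given score threshold."""
    y_pred = (scores >= threshold).astype(int)
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return fp * a + fn * (2 - a)

# Sweep thresholds and keep the cheapest one
thresholds = np.linspace(0.0, 1.0, 101)
costs = [expected_cost(t) for t in thresholds]
best = thresholds[int(np.argmin(costs))]
```

Note that plain accuracy would favour a much higher threshold here (predicting "negative" almost always is ~95% accurate), while the cost-weighted objective pushes the threshold low: exactly the asymmetry described above.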
If the classifier is instead about finding relevant documents in a database, for example, you can weigh how much time is wasted reading an irrelevant document against the value of finding a relevant one.