Deciding who to follow when experts have an argument

Question
QUESTION: My question is how to calculate probabilities when confronted with experts who are having an argument about a particular subject (in cases where there are only 2 possible answers, such as yes/no questions).
For example: A, B and C are all expert doctors. When each of them (individually) gives a diagnosis (on a yes/no question), the chance of accuracy is 90%, or 9/10. In a case where A and B argue for a certain treatment, and C argues against, what is the chance that A and B are correct? Originally I worked out that the answer is 9/10, but when constructing the general formula I realised that if 11 doctors were arguing against 10 doctors the answer would be the same - 9/10 - which is counter-intuitive and wrong. I then realised that having more doctors at loggerheads over a question indicates that the question is more difficult than average, in which case the chance of accuracy of all the doctors goes down. (90% is only the success rate for the average question, not for the more difficult ones.)

My question is: given these facts, is there any way to give a formula which will tell me the probability of accuracy given the number of doctors on each side of the issue (assuming that all doctors have the same expertise, i.e. that they answer 90% of questions correctly)?

If you could help me I would appreciate it!

ANSWER: Hi David,
Considering just the 'facts' you are stipulating in the paradigm we are looking at here, the solution is fairly straightforward. You should know, though, that probability relies heavily on information, especially about the mechanics of the situation; the more information you have, the better your estimates become.
Anyway, I'll start with a question for you to think about: what does a second opinion ever do for anyone?
The chance of one doctor giving a correct diagnosis is 90%. The chance of two doctors giving two correct, independent diagnoses reduces to
0.9 x 0.9 = 0.81
= 81%
It almost seems as though a second doctor agreeing with a correct diagnosis makes you less certain overall rather than more assured. Note that here we're supposing we already know what the correct diagnosis is.
The chance of two doctors agreeing on a diagnosis (correct or not) is
0.9 x 0.9 + 0.1 x 0.1 = 0.81 + 0.01
= 82%
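If you'd like to verify these numbers yourself, here is a minimal Python sketch (my own illustration, not part of the original exchange, assuming independent doctors):

p = 0.9  # each doctor is independently correct with probability 0.9

# Both doctors correct (supposing we already know the true diagnosis):
both_correct = p * p                      # 0.81
# Both doctors agree, whether correct or not:
both_agree = p * p + (1 - p) * (1 - p)    # 0.82

print(both_correct, both_agree)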
Following that line of thinking, adding a third doctor doesn't make the probability of a correct diagnosis better. It makes it either worse or, with a disagreement, far worse. And so the probability of A and B being correct while C is wrong would be
0.9 x 0.9 x 0.1 = 8.1%
This seems small, but you should realise that it is unlikely in the first place that a doctor with 90% accuracy would get a diagnosis wrong. It should also be noted that this is the probability involving three specific doctors. The probability of only two out of any three doctors being correct is actually, from the binomial distribution:
3 x 0.9^2 x 0.1 = 24.3%
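To see the difference between the 'specific doctors' case and the 'any two of three' case, here is a short Python check (again my own sketch, not from the original answer):

from math import comb

p = 0.9

# Specific doctors: A and B correct, C wrong.
specific = p * p * (1 - p)              # 0.081, i.e. 8.1%

# Any two of three correct: there are C(3, 2) = 3 ways to choose them.
any_two = comb(3, 2) * p**2 * (1 - p)   # 0.243, i.e. 24.3%

print(specific, any_two)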
Now, the real answer you seek can be worked out in the same manner using the binomial distribution. With 11 doctors vs 10, the probability would be very small, and you will confirm what you said about more doctors bringing down the accuracy.

On a last note, I should mention that it is easy to overthink probabilities. Consider an archer who has a 90% chance of hitting a target. If he misses on a first attempt, he is very likely (in reality) to make adjustments to his shooting on the next attempt, decreasing his chances of missing. But without further information on his skill, experience or determination, we are forced to keep considering his chance of failure on every attempt to be 10%. That is what I meant earlier about the mechanics of the particular problem, and it can certainly apply to our doctors, too.

You can always get back to me.
Regards.

---------- FOLLOW-UP ----------

QUESTION: Thank you for replying to my question.

I agree that when 2 doctors are asked and give the same result, the probability that they are correct is 81/82, or in other words: [(0.9)^2]/[(0.9)^2+(0.1)^2]. But if you ask a 3rd doctor and he agrees, then the probability is even higher - it is 729/730, in other words: [(0.9)^3]/[(0.9)^3+(0.1)^3].

If the 3rd doctor disagrees, the probability of the 2 being correct equals: [probability of 2 correct and 1 wrong]/[(probability of 2 correct and one wrong)+(probability of 1 correct and 2 wrong)]. In other words: [((0.9)^2)*(0.1)]/[((0.9)^2)*(0.1)+(0.9)*((0.1)^2)]. This equals 0.9.

The problem is that with 11 doctors against 10 doctors, the probability of the 11 doctors being correct is [((0.9)^11)*((0.1)^10)]/[((0.9)^11)*((0.1)^10)+((0.9)^10)*((0.1)^11)]. This also equals 0.9! Something is wrong here.

What do you think?

ANSWER: Hi David,
First, I'd like to say that it's nice to see that you understand how successive opinions make us more assured about the accuracy of a diagnosis. You should now find it easy to follow the rest of the explanation.
What is most important here is to understand what happens to probabilities before and after events. Let's consider the hypothetical situation where there's a 70% statistical probability that it will rain today. We would continue to hold this view until today is over, at which point the probability becomes 0% or 100%. This might seem trivial, but it will clarify the rest of the explanation.
Another example is to consider 10 consecutive tosses of an unbiased coin. The probability of getting 10 heads is 0.5^10, which indicates it is very unlikely. Let us now suppose that after 9 tosses we've had all heads; what is now the probability of getting 10 heads? It is simply 0.5, which is quite achievable. So you see, the probability has changed drastically from 0.5^10 to just 0.5 due to the intermediate event of tossing 9 consecutive heads. Of course it is again very unlikely to toss those heads consecutively, but once it has happened the likelihood doesn't matter anymore and we only look to the upcoming events. There are many other ways to look at this, including the chances of winning the lottery twice. It could be extremely unlikely in the beginning, but if you manage to win once then the chance of winning again, though still daunting, isn't as grim as it was when your goal was to win twice.
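To make the coin example concrete, here is a quick Monte Carlo sketch in Python (an illustration of mine, not something from the original answer); among the runs that begin with 9 heads, roughly half go on to give a 10th head:

import random

random.seed(1)
runs_with_9_heads = 0
runs_with_10_heads = 0

for _ in range(10**6):
    tosses = [random.random() < 0.5 for _ in range(10)]
    if all(tosses[:9]):           # the first 9 tosses were all heads
        runs_with_9_heads += 1
        if tosses[9]:             # ...and the 10th was a head too
            runs_with_10_heads += 1

# Unconditionally, 10 heads has probability 0.5**10, but conditional
# on 9 heads already, the estimate below comes out close to 0.5.
print(runs_with_10_heads / runs_with_9_heads)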

In my initial response I mentioned that the probability of only two out of any three doctors being correct, using the binomial distribution, is:
3 x 0.9^2 x 0.1 = 24.3%
This refers to the probability of it happening before the diagnoses are made. Once we get their responses and know that two of the doctors disagree with the last one then this probability has to be modified due to our added knowledge. This is referred to as conditional probability.
The probability of two doctors out of three disagreeing with the third (before the diagnoses) is, again using the binomial distribution:
(3 x 0.9^2 x 0.1) + (3 x 0.1^2 x 0.9) = 24.3% + 2.7% = 27%
[Obviously this changes to 100% once the responses are in and we know that it is indeed the case. Basically, you have to keep a mental check on things to know which side of an event a probability refers to.]
So, the (conditional) probability that only two out of three doctors are correct, after we know that two of them disagree with the third, is
24.3/27 = 90%
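In case it helps, here is the same conditional calculation in a few lines of Python (a sketch of my own, under the stated independence assumption):

from math import comb

p = 0.9

# Before any diagnoses: P(exactly 2 of 3 doctors are correct).
two_correct = comb(3, 2) * p**2 * (1 - p)              # 0.243
# Before any diagnoses: P(a 2-vs-1 split occurs at all).
any_split = two_correct + comb(3, 1) * p * (1 - p)**2  # 0.27

# Conditional probability the pair is right, given that a split occurred:
print(two_correct / any_split)                         # 0.9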
You have the correct answer, but you haven't included the combinatorial coefficients in your calculations. They do cancel out in these kinds of situations, but you can't always be sure they will. And of course it is wrong to state that the probability of only two out of any three doctors being correct is 0.9^2 x 0.1 instead of 3 x 0.9^2 x 0.1. The former would only be true if we were dealing with specific doctors A, B and C, like I mentioned in my initial response, but we're dealing in general here.
In the case of 11 vs 10 doctors, the calculation follows the same route, and we end up with a 90% chance of the 11 doctors being right after knowing that they disagree with the remaining 10. The intermediate numbers are very different, though. For instance, the probability of 11 doctors out of 21 disagreeing with the other 10 (before the diagnoses) is
(21C11)*(0.9^11)*(0.1^10) + (21C11)*(0.1^11)*(0.9^10)
where 21C11 is the binomial coefficient, read as "21 choose 11".
To understand again why the result is the same as for two doctors versus one, let us return to our coin tossing example and consider the probability of getting 100 heads in a row. It is 0.5^100. And, just like before, once we've somehow managed to achieve 99 heads in a row, this probability reduces to just 0.5. In essence, it doesn't matter how many heads we've had in a row (9, 99 or 999999); the probability of a head (or a tail) on the next toss is always 0.5. What you should realise is that we've already done the hard work to get to that point, and the last step is just as difficult as getting a head in a single toss of the coin. 99 heads in a row is far more difficult to achieve than 9, but that doesn't matter anymore once we're there; the probability is always 0.5 from that point on.
Back to our doctors: you can already sense from everything that's been said that the result would be the same when considering 1001 vs 1000 doctors. But you can also look at it in the following way. If at some point you have 1000 doctors on each side, then you have no reason to suppose that a particular side is correct, since it is established in the paradigm that they are of equal expertise. Basically, you are back where you started. If, however, an additional doctor comes along, then you have to consider him as you would if he were the only doctor in existence and agree that he is 90% accurate, and so is whichever group he joins.

Now, to derive a general formula for this probability, consider the situation involving n vs m doctors, each with accuracy p. The probability of only n out of any n+m doctors being correct is
[(n+m)Cn]*(p^n)*((1-p)^m)
and the probability of the n doctors disagreeing with the other m (before the diagnoses) is
[(n+m)Cn]*(p^n)*((1-p)^m) + [(n+m)Cn]*((1-p)^n)*(p^m)
Therefore, the probability that the n doctors are correct, after we know that they disagree with the m doctors, is
[(n+m)Cn]*(p^n)*((1-p)^m) / { [(n+m)Cn]*(p^n)*((1-p)^m) + [(n+m)Cn]*((1-p)^n)*(p^m) }
The combinatorial coefficients cancel out, leaving
(p^n)*((1-p)^m) / { (p^n)*((1-p)^m) + ((1-p)^n)*(p^m) }
Dividing through by (p^m)*((1-p)^m), we get
(p^(n-m)) / { (p^(n-m)) + ((1-p)^(n-m)) }
and we can see that the probability depends only on the difference between the numbers of doctors on each side, not on how many there are in total.
For n-m = 1, it becomes
p / [p + (1-p)]
= p / 1
= p
as we have seen before.
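If it helps, the entire derivation collapses into a few lines of Python (a sketch under the stated assumptions of independence and equal accuracy; the function name majority_correct is my own):

def majority_correct(n, m, p):
    """Probability that the n doctors are right, given an n-vs-m split.

    The binomial coefficients cancel, so only the difference n - m matters.
    """
    d = n - m
    return p**d / (p**d + (1 - p)**d)

# 2 vs 1 and 11 vs 10 both give 0.9, as derived above:
print(majority_correct(2, 1, 0.9))    # 0.9
print(majority_correct(11, 10, 0.9))  # 0.9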

I hope I have been able to help you.

Regards.

---------- FOLLOW-UP ----------

QUESTION: So are you seriously saying that on an issue where 10 experts disagree with 1, you would have the same confidence in the majority being correct as you would if 10,009 were arguing with 10,000?

Answer
Hi David,
Yes, if you've read and understood everything, you can see why that would be the case.
You have to understand that the chance of 10 experts disagreeing with 1 in the first place is very different from that of 10009 disagreeing with 10000. That is the essence. No matter how many times you've tossed a coin, be it once or a million times, you're just as likely to get a head or a tail on the next toss. In reality, you may suspect a bias on the part of the coin and conclude that a head is more likely than a tail on the next toss, but without further information on the fairness of the coin, the math is the math.
By the way, if you think about it, why would 10000 experts who each have a 90% chance of accuracy all get it wrong? If something seems odd to you then that is what you should be looking at.
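For what it's worth, here is the arithmetic for both splits in a few lines of Python (my own check, using the formula derived earlier); the difference is 9 in each case, so the two confidences come out identical, and both are very close to 1:

p = 0.9
d = 9   # 10 - 1 and 10009 - 10000 both give a difference of 9
print(p**d / (p**d + (1 - p)**d))   # ~0.9999999974 for both splits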

Regards
