A = .20, B = .30, C = .30, D = .20
The logical fallacy in this scenario is contained in the phrase "Therefore—it's a fit!" The nonsignificant test result in this case would allow the investigator to conclude only that this particular sample provides no evidence contradicting his hypothesis. But the absence of evidence contradicting a hypothesis is not at all the same thing as positive evidence in support of a hypothesis. The point of this not-too-farfetched scenario is that chi-square is a test of rather low power; its ability to reject the null hypothesis, even when the null hypothesis is patently false, is quite weak. And the smaller the size of the sample, the weaker it is.
In both of the following simulations, pseudo-random numbers are drawn and shaped in such a way as to ensure that the actual proportions within the imaginary muglout population are not

A = .20, B = .30, C = .30, D = .20

but rather

A = .25, B = .25, C = .25, D = .25
In the first simulation, random samples of size n are drawn from the population one sample at a time. With df=3, the critical value of chi-square for significance at or beyond the 0.05 level is 7.815; hence, any calculated value of chi-square equal to or greater than 7.815 is recorded as "significant," while any value smaller than that is noted as "nonsignificant." The default value of n is set at 60 to correspond to the scenario described above. To simulate this scenario, click the "Run Simulation 1" button and note the results; then do the same thing 15 or 20 times over. And recall as you are doing all this that here is a situation where the null hypothesis is patently false. The greater the power of the test, the greater will be the percentage of results that turn up as "significant"; the lower the power, the more often you will end up with "nonsignificant." For a sample size as small as 60, you will find "nonsignificant" turning up distressingly often. Plug in a smaller sample size, and it will turn up even more often. With a larger sample size, "nonsignificant" will turn up less often, though even with samples as large as n=100 it comes up quite a lot more often than real-life researchers would ever want to contemplate.
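The logic of Simulation 1 can be sketched in a few lines of Python. This is not the page's own code (the page runs its simulation in the browser); it is a minimal illustration of the same procedure, with the function name `run_simulation_1` chosen here for convenience. Each draw is assigned to a category according to the true proportions (.25 each), and the resulting counts are tested against the expected counts under the hypothesized .20/.30/.30/.20 distribution.

```python
import random

# Hypothesized (null) proportions for categories A-D, as in the scenario.
EXPECTED_PROPS = [0.20, 0.30, 0.30, 0.20]
# Actual proportions in the simulated muglout population (the null is false).
TRUE_PROPS = [0.25, 0.25, 0.25, 0.25]
# Critical value of chi-square for df = 3 at the 0.05 significance level.
CRITICAL_VALUE = 7.815

def run_simulation_1(n=60, seed=None):
    """Draw one sample of size n from the true population, then test it
    against the hypothesized proportions. Returns (chi_square, verdict)."""
    rng = random.Random(seed)
    counts = [0, 0, 0, 0]
    for _ in range(n):
        r = rng.random()          # classify each draw by cumulative probability
        cum = 0.0
        for i, p in enumerate(TRUE_PROPS):
            cum += p
            if r < cum:
                counts[i] += 1
                break
    # Chi-square: sum over categories of (observed - expected)^2 / expected.
    chi_sq = sum((obs - n * exp) ** 2 / (n * exp)
                 for obs, exp in zip(counts, EXPECTED_PROPS))
    verdict = "significant" if chi_sq >= CRITICAL_VALUE else "nonsignificant"
    return chi_sq, verdict
```

Running this repeatedly with n=60, as the text suggests doing with the button, will show "nonsignificant" coming up most of the time even though the null hypothesis is false.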
The second simulation does the same thing, except that it draws random samples 100 at a time. Here again, the default value for sample size is set at n=60. You can change it to another value if you wish, but be advised that the underlying calculations for larger values of n might take a while, depending on the inherent speed of your computer.
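The second simulation amounts to an empirical estimate of the test's power: draw many samples and record what fraction reach significance. The following self-contained sketch (again an illustration, not the page's code, with `estimate_power` a name chosen here) batches 100 samples at a time, as the simulation does.

```python
import random

TRUE_PROPS = [0.25, 0.25, 0.25, 0.25]      # actual population proportions
EXPECTED_PROPS = [0.20, 0.30, 0.30, 0.20]  # proportions under the (false) null
CRITICAL_VALUE = 7.815                     # chi-square, df = 3, alpha = .05

def estimate_power(n=60, reps=100, seed=1):
    """Draw `reps` independent samples of size n and return the fraction
    judged significant -- an empirical estimate of the test's power
    against this particular false null hypothesis."""
    rng = random.Random(seed)
    significant = 0
    for _ in range(reps):
        counts = [0, 0, 0, 0]
        for _ in range(n):
            r = rng.random()
            cum = 0.0
            for i, p in enumerate(TRUE_PROPS):
                cum += p
                if r < cum:
                    counts[i] += 1
                    break
        chi_sq = sum((obs - n * exp) ** 2 / (n * exp)
                     for obs, exp in zip(counts, EXPECTED_PROPS))
        if chi_sq >= CRITICAL_VALUE:
            significant += 1
    return significant / reps
```

With n=60 the estimated power comes out low, echoing the text's warning; increasing n (at the cost of longer computation, as noted above) raises it.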
[Interactive results table: observed counts for categories A, B, C, D and total n, shown against expected counts of 20%, 30%, 30%, and 20%; below it, a field for entering the sample size n.]
