Fisher's Exact Test Calculator

2x2 Contingency Table Analysis for Statistical Significance

How to Use This Tool

  1. Enter observed counts in the 2x2 Table below.
  2. Set your Significance Level (α) (default 0.05, corresponding to 95% confidence).
  3. Click Calculate to view P-values, Confidence Intervals, and Diagnostic Metrics.
[2x2 contingency table input: rows Group A and Group B, columns Outcome 1 (+) and Outcome 2 (-); row, column, and grand totals are filled in automatically.]

Confidence Level: 95%

Analysis Results

Using 2-sided testing with α = 0.05

Detailed Analysis Tables

Each of the following is reported as a 2x2 table (Grp A / Grp B by Out 1 (+) / Out 2 (-)):

  • Expected Counts: the cell counts you would see if independence held (row total × column total / N).
  • Deviations (O − E): observed minus expected counts.
  • Chi-Square Contributions: (O − E)² / E for each cell.
  • Standardized Residuals: (O − E) / √E for each cell.
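As a rough sketch of how these four tables are derived from the observed counts (the cell values below are hypothetical, and NumPy is just one convenient way to do it):

```python
import numpy as np

# Hypothetical observed 2x2 table: rows = Group A / Group B, columns = Outcome 1 / Outcome 2
observed = np.array([[12.0, 5.0],
                     [6.0, 14.0]])

row_totals = observed.sum(axis=1, keepdims=True)   # shape (2, 1)
col_totals = observed.sum(axis=0, keepdims=True)   # shape (1, 2)
n = observed.sum()

expected = row_totals @ col_totals / n             # counts if independence held
deviations = observed - expected                   # O - E
contributions = deviations ** 2 / expected         # (O - E)^2 / E
std_residuals = deviations / np.sqrt(expected)     # (O - E) / sqrt(E)

print(expected, deviations, contributions, std_residuals, sep="\n\n")
```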
Fisher's Exact Test Results

  • Two-tailed P: exact probability of a table as extreme as, or more extreme than, the observed one.
  • Left-tailed P: Prob(X ≤ a).
  • Right-tailed P: Prob(X ≥ a).

Chi-Square Test Results

  • Pearson Chi-Square: statistic (X²) and P-value, uncorrected.
  • Yates Chi-Square: statistic (X²) and P-value, with continuity correction.
  • Degrees of Freedom: df = (rows − 1)(cols − 1) = 1 for a 2x2 table.
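For reference, the same quantities can be reproduced with SciPy; a sketch using hypothetical counts:

```python
from scipy.stats import chi2_contingency, fisher_exact

table = [[12, 5],   # hypothetical counts: Group A
         [6, 14]]   #                      Group B

_, p_two_sided = fisher_exact(table, alternative="two-sided")
_, p_left = fisher_exact(table, alternative="less")      # Prob(X <= a)
_, p_right = fisher_exact(table, alternative="greater")  # Prob(X >= a)

chi2_pearson, p_pearson, df, _ = chi2_contingency(table, correction=False)
chi2_yates, p_yates, _, _ = chi2_contingency(table, correction=True)

print(f"Fisher P: two-tailed {p_two_sided:.4f}, left {p_left:.4f}, right {p_right:.4f}")
print(f"Pearson X2 {chi2_pearson:.4f} (P {p_pearson:.4f}); "
      f"Yates X2 {chi2_yates:.4f} (P {p_yates:.4f}); df {df}")
```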
Cochran-Armitage Trend Test

Reported twice, once for the standard (uncorrected) statistic and once with a continuity correction, each giving a Z statistic and P-value for three scenarios:

  • Any Trend (2-sided)
  • Increasing Trend (proportion of Outcome 1 rises from Group A to Group B)
  • Decreasing Trend (proportion of Outcome 1 falls from Group A to Group B)

Armitage Rank Correlation Trend Test

Reports the numerator (U), its standard error (SE), and a Z statistic with P-values for the same three scenarios.
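A minimal sketch of the uncorrected Cochran-Armitage trend statistic, assuming equally spaced scores 0 and 1 for the two ordered groups (counts are hypothetical, and variance conventions differ slightly between implementations):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical data, ordered Group A then Group B
successes = [12, 6]        # Outcome 1 counts per group
totals = [17, 20]          # group sizes
scores = [0, 1]            # assumed equally spaced group scores

N = sum(totals)
p_bar = sum(successes) / N

# T = sum of score * (observed successes - successes expected under no trend)
T = sum(s * (a - n * p_bar) for s, a, n in zip(scores, successes, totals))
var_T = p_bar * (1 - p_bar) * (
    sum(s * s * n for s, n in zip(scores, totals))
    - sum(s * n for s, n in zip(scores, totals)) ** 2 / N
)

z = T / sqrt(var_T)
p_any = 2 * norm.sf(abs(z))     # any trend (2-sided)
p_increasing = norm.sf(z)       # Outcome 1 proportion rising with score
p_decreasing = norm.cdf(z)      # Outcome 1 proportion falling with score
print(z, p_any, p_increasing, p_decreasing)
```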
Risk & Clinical Metrics

  • Relative Risk (RR): Risk(Grp A) / Risk(Grp B).
  • 95% CI for RR: computed with the log method.
  • NNT (Benefit): Number Needed to Treat, 1 / |risk difference|.
  • Likelihood Ratios: LR Positive and LR Negative, each with a 95% CI.
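A sketch of these calculations using the usual log-method CI for RR; the cell names follow the table layout (a, b = Group A; c, d = Group B) and the counts are hypothetical:

```python
from math import exp, log, sqrt

a, b, c, d = 12, 5, 6, 14          # hypothetical counts: a, b = Group A; c, d = Group B
z = 1.959964                       # two-sided 95% critical value

risk_a = a / (a + b)
risk_b = c / (c + d)

rr = risk_a / risk_b                                    # relative risk
se_log_rr = sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))     # SE of ln(RR), log method
rr_ci = (exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr))

nnt = 1 / abs(risk_a - risk_b)                          # number needed to treat

sens = a / (a + c)                                      # sensitivity
spec = d / (b + d)                                      # specificity
lr_pos = sens / (1 - spec)                              # LR+
lr_neg = (1 - sens) / spec                              # LR-

print(rr, rr_ci, nnt, lr_pos, lr_neg)
```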
Strength of Association

  • Odds Ratio: cross-product ratio, OR = (a × d) / (b × c).
  • 95% Confidence Interval: logit method, with the Haldane correction (0.5 added to each cell) applied if any cell is zero.
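A sketch of the logit-method interval; the Haldane adjustment shown (adding 0.5 to every cell when a zero appears) is the common convention and is assumed here, and the counts are hypothetical:

```python
from math import exp, log, sqrt

a, b, c, d = 12, 5, 6, 14          # hypothetical cell counts
z = 1.959964                       # two-sided 95% critical value

# Haldane correction: add 0.5 to every cell if any cell is zero
if 0 in (a, b, c, d):
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))

odds_ratio = (a * d) / (b * c)                   # cross-product ratio
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)          # SE of ln(OR), logit method
ci = (exp(log(odds_ratio) - z * se_log_or),
      exp(log(odds_ratio) + z * se_log_or))      # exp[ln(OR) +/- Z * SE]

print(odds_ratio, ci)
```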
Association & Correlation Statistics

  • Cramer's V
  • Gamma (Yule's Q)
  • Goodman & Kruskal's Tau
  • Kendall's tau-B (with and without correction for ties)
  • Kendall's tau-C
  • Lambda (columns dependent, rows dependent, and symmetric)
  • Pearson's Contingency Coefficient
  • Phi Coefficient
  • Tschuprow's T
  • Yule's Y (Colligation)
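Several of these have simple closed forms for a 2x2 table; a sketch with hypothetical counts:

```python
from math import sqrt

a, b, c, d = 12, 5, 6, 14          # hypothetical cell counts
n = a + b + c + d

# Phi (signed); for a 2x2 table, Cramer's V and Tschuprow's T both equal |phi|
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
cramers_v = abs(phi)

# Pearson's contingency coefficient, using X2 = n * phi^2
chi2 = n * phi ** 2
contingency_c = sqrt(chi2 / (chi2 + n))

# Yule's Q (gamma for a 2x2) and Yule's Y (colligation), both odds-ratio based
odds_ratio = (a * d) / (b * c)
yules_q = (odds_ratio - 1) / (odds_ratio + 1)
yules_y = (sqrt(odds_ratio) - 1) / (sqrt(odds_ratio) + 1)

print(phi, cramers_v, contingency_c, yules_q, yules_y)
```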
Paired Data / Agreement Analysis

Note: These tests assume the 2x2 table represents Paired Data (e.g., Pre/Post or Rater 1/Rater 2), not independent groups.

McNemar Test
  • Chi-Square and P-value: tests consistency of the marginal proportions using the discordant pairs (b and c).

Cohen's Kappa (Agreement)
  • Kappa (κ), its SE, and a 95% CI.
  • Hypothesis test: Z score and P-value for H0: κ = 0.
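A sketch of both paired-data analyses; the counts are hypothetical, and the kappa SE shown is one common large-sample approximation (the calculator's exact formula may differ):

```python
from math import sqrt
from scipy.stats import chi2, norm

a, b, c, d = 20, 6, 2, 30          # hypothetical paired counts; b and c are the discordant pairs
n = a + b + c + d

# McNemar test (uncorrected form) on the discordant cells
mcnemar_stat = (b - c) ** 2 / (b + c)
mcnemar_p = chi2.sf(mcnemar_stat, df=1)

# Cohen's kappa: observed agreement vs agreement expected by chance
p_obs = (a + d) / n
p_exp = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
kappa = (p_obs - p_exp) / (1 - p_exp)

# Large-sample SE approximation, 95% CI, and approximate test of H0: kappa = 0
se = sqrt(p_obs * (1 - p_obs) / (n * (1 - p_exp) ** 2))
ci = (kappa - 1.959964 * se, kappa + 1.959964 * se)
z = kappa / se
kappa_p = 2 * norm.sf(abs(z))

print(mcnemar_stat, mcnemar_p, kappa, ci, z, kappa_p)
```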
Diagnostic Metrics

  • Sensitivity (TPR): a / (a + c)
  • Specificity (TNR): d / (b + d)
  • PPV (Precision): a / (a + b)
  • NPV (Negative Predictive Value): d / (c + d)
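A sketch of the four metrics with hypothetical counts:

```python
a, b, c, d = 12, 5, 6, 14          # hypothetical counts; rows = groups, columns = outcomes

sensitivity = a / (a + c)          # true positive rate
specificity = d / (b + d)          # true negative rate
ppv = a / (a + b)                  # positive predictive value (precision)
npv = d / (c + d)                  # negative predictive value

for name, value in [("Sensitivity", sensitivity), ("Specificity", specificity),
                    ("PPV", ppv), ("NPV", npv)]:
    print(f"{name}: {value:.2%}")
```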
Distribution Visualization
The graph below shows the hypergeometric distribution of the first cell count given the fixed margins. Red shaded areas indicate the critical region (outcomes as extreme as, or more extreme than, the observed value), and the blue line marks your observed value.
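A sketch of how that distribution and critical region can be generated with SciPy (the margins below are hypothetical):

```python
import numpy as np
from scipy.stats import hypergeom

# Hypothetical fixed margins: Group A row total, Outcome 1 column total, grand total
row_a, col_1, n_total = 17, 18, 37
a_observed = 12                    # observed count in the Group A / Outcome 1 cell

support = np.arange(max(0, row_a + col_1 - n_total), min(row_a, col_1) + 1)
pmf = hypergeom.pmf(support, n_total, col_1, row_a)

# "As extreme or more extreme": tables whose probability is no larger than the observed one
p_obs = hypergeom.pmf(a_observed, n_total, col_1, row_a)
critical = support[pmf <= p_obs * (1 + 1e-9)]

print(list(zip(support, pmf.round(4))))
print("critical region:", critical)
```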

Understanding Your Results

Armitage Rank Correlation
Tests for linear trend by correlating group order with outcome. Similar to Cochran-Armitage.
Cochran-Armitage Trend
Tests for linear trend assuming ordered groups (Grp A < Grp B).
Increasing: Proportion of Outcome 1 increases from A to B.
Decreasing: Proportion of Outcome 1 decreases from A to B.
Cohen's Kappa
Measures inter-rater agreement for paired data, correcting for chance agreement.
Confidence Interval
Range containing the true OR with (1 − α) × 100% confidence (Logit Method).
exp[ln(OR) ± Z × SE]
Cramer's V
Measure of association strength. Range [0, 1]. For 2x2, V = |φ|.
Gamma
Measure of association based on concordant vs discordant pairs (ignores ties).
(C - D) / (C + D)
Goodman & Kruskal's Tau
Proportional reduction in error measure similar to Lambda, but based on category marginals.
Kendall's Tau-b
Rank correlation coefficient accounting for ties. Range [-1, 1].
Lambda (λ)
Proportional Reduction in Error (PRE). How much does knowing the independent variable reduce error in predicting the dependent?
Likelihood Ratio (Positive)
How much more likely a positive test is found in a diseased person compared to non-diseased.
Sens / (1 - Spec)
Likelihood Ratio (Negative)
How much less likely a negative test is found in a diseased person compared to non-diseased.
(1 - Sens) / Spec
McNemar Test
Used for paired data (e.g., Pre vs Post) to test if marginal proportions changed. Uses discordant pairs (b and c).
NNT (Number Needed to Treat)
Average number of patients who need to be treated to prevent one additional bad outcome.
1 / |Risk A - Risk B|
NPV
Negative Predictive Value: Prob. of Outcome 2 given Group B.
d / (c + d)
Odds Ratio (OR)
The strength of association. OR=1 implies no association.
OR = (a × d) / (b × c)
Pearson Chi-Square
Test for independence (approximation).
X² = ∑ (O − E)² / E
Phi Coefficient (φ)
Measure of association for 2x2 tables. Range [-1, 1].
φ = √(X² / N) (signed)
PPV (Precision)
Positive Predictive Value: Prob. of Outcome 1 given Group A.
a / (a + b)
P-value (Two-tailed)
Probability, assuming no true association, of obtaining a table as extreme as, or more extreme than, the one observed.
∑p(i) where p(i) ≤ p(observed)
Relative Risk (RR)
Ratio of the probability of the outcome in Group A vs Group B.
RR = [a/(a+b)] / [c/(c+d)]
Sensitivity
% of true positive cases (Outcome 1) correctly identified.
a / (a + c)
Specificity
% of true negative cases (Outcome 2) correctly identified.
d / (b + d)
Yates' Chi-Square
Chi-Square with continuity correction. More conservative than Pearson.
X² = ∑ (|O − E| − 0.5)² / E
Yule's Y
Coefficient of colligation. Measure of association based on the odds ratio.

What is Fisher's Exact Test?

Fisher's Exact Test determines if there is a non-random association between two categorical variables. It calculates the exact probability of the data using the hypergeometric distribution rather than an approximation.

Hypotheses

  • Null Hypothesis (H0): There is no association between the two variables (The proportions are equal; Odds Ratio = 1).
  • Alternative Hypothesis (H1): There is an association between the two variables (The proportions are not equal; Odds Ratio ≠ 1).

Assumptions

  • Random Sampling: The data comes from a random sample.
  • Independence: Observations are independent of each other.
  • Fixed Marginals: Technically, Fisher's test assumes row and column totals are fixed, though it is valid even if they are not.

The Formula: The exact probability (p) of observing a specific 2x2 table configuration (with cell counts a, b, c, d) given the fixed marginal totals is:

p = [(a+b)! (c+d)! (a+c)! (b+d)!] / [a! b! c! d! n!]

Calculating the P-value: The P-value is not just the probability of the observed table. It is calculated by generating all possible tables that have the same marginal totals (row and column sums). We compute the hypergeometric probability for each possible table, and the final P-value is the sum of probabilities for all tables that are as extreme or more extreme (less likely) than the observed table.
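A minimal sketch of that enumeration, leaning on SciPy's hypergeometric distribution (the counts and the helper name fisher_two_tailed are hypothetical):

```python
import numpy as np
from scipy.stats import hypergeom

def fisher_two_tailed(a, b, c, d):
    """Sum the probabilities of every table sharing the observed margins whose
    probability is no larger than the observed table's."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d
    support = np.arange(max(0, col1 - row2), min(row1, col1) + 1)
    pmf = hypergeom.pmf(support, n, col1, row1)     # P(cell a = x) given fixed margins
    p_obs = hypergeom.pmf(a, n, col1, row1)
    return pmf[pmf <= p_obs * (1 + 1e-9)].sum()     # tolerance for floating-point ties

print(fisher_two_tailed(12, 5, 6, 14))              # hypothetical counts
```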

Chi-Square Comparison: We also provide the Chi-Square test P-value (with Yates' continuity correction). While Chi-Square is an approximation, it converges with Fisher's test as sample sizes increase. Significant discrepancies between the two usually indicate that sample sizes are too small for Chi-Square.

Frequently Asked Questions

When should I use Fisher's Exact Test instead of Chi-Square?

Fisher's Exact Test is recommended when sample sizes are small. A common rule of thumb is to use Fisher's test when any expected cell count in the 2x2 table is less than 5. Unlike the Chi-Square test, which relies on large-sample approximations, Fisher's test provides an exact P-value valid for any sample size.

What is the difference between One-tailed and Two-tailed P-values?

A two-tailed P-value tests for any difference between the groups (Group A ≠ Group B). A one-tailed P-value tests for a specific direction of difference (e.g., is Group A > Group B specifically?). In most scientific research, the two-tailed test is the standard unless you have a strong a priori reason to expect a difference in only one direction.

What does the Odds Ratio tell me?

The Odds Ratio (OR) quantifies the strength of association between two events. An OR of 1 implies no association. An OR greater than 1 implies the event is more likely in the first group. If the 95% Confidence Interval for the OR includes 1, the result is typically not statistically significant at the 0.05 level.

Does this calculator handle large sample sizes?

Yes. This calculator uses advanced Log-Gamma functions to compute factorials, allowing it to handle very large sample sizes (up to thousands or more) without the numerical overflow errors common in standard factorial calculations.
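The log-gamma trick itself is straightforward: compute log(k!) as lgamma(k + 1) and only exponentiate the final, small probability. A sketch with hypothetical counts and a hypothetical helper:

```python
from math import exp, lgamma

def log_factorial(k):
    # log(k!) via the log-gamma function; never materializes the huge factorial
    return lgamma(k + 1)

def table_log_prob(a, b, c, d):
    """Log of the exact probability of one 2x2 table with fixed margins."""
    n = a + b + c + d
    return (log_factorial(a + b) + log_factorial(c + d)
            + log_factorial(a + c) + log_factorial(b + d)
            - log_factorial(a) - log_factorial(b)
            - log_factorial(c) - log_factorial(d) - log_factorial(n))

# Works even where the raw factorials (e.g. 5000!) would overflow a double
print(exp(table_log_prob(1200, 800, 900, 2100)))
```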