# How to Perform Dunnett’s Test in R

After an ANOVA has been completed, the next step is to determine which group means are significantly different from one another.

Several types of post hoc tests are available; the most commonly used ones are Tukey's HSD multiple comparison test and Dunnett's test.

If our treatment groups contain a control group, and the experimenter wishes to determine whether the tested groups are significantly different from the control group, Dunnett’s test comes in handy.

This tutorial describes how to perform Dunnett's test in R.


## Dunnett’s test in R

Assume there are three groups. The first group is a control group, while the other two are test groups. The dosages of the test groups differ, and we want to know whether the test groups differ significantly from the control group.

The first step is to visually inspect the data; we then run a one-way ANOVA followed by Dunnett's test.

### Step 1: Create a Data Frame

```r
set.seed(23)
data <- data.frame(Group = rep(c("control", "Test1", "Test2"), each = 10),
                   value = c(rnorm(10), rnorm(10), rnorm(10)))
data$Group <- as.factor(data$Group)
```

Let's take a look at the data frame:

```r
head(data)
```

```
    Group      value
1 control  0.1932123
2 control -0.4346821
3 control  0.9132671
4 control  1.7933881
5 control  0.9966051
6 control  1.1074905
```

### Step 2: Visualize the values for each group.

To see the distribution of values in each group, we can use a box plot or a violin plot.

Box plot in R:

```r
boxplot(value ~ Group, data = data,
        main = "Product Values", xlab = "Groups", ylab = "Value",
        col = "red", border = "black")
```
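A violin plot is a useful alternative, since it shows the full shape of each group's distribution rather than just the quartiles. A minimal sketch, assuming the `vioplot` package is installed (`install.packages("vioplot")`); the data frame is rebuilt here so the snippet is self-contained:

```r
# Violin plot of the same data; assumes the 'vioplot' package is available.
library(vioplot)

set.seed(23)
data <- data.frame(Group = rep(c("control", "Test1", "Test2"), each = 10),
                   value = c(rnorm(10), rnorm(10), rnorm(10)))

# One violin per group, mirroring the box plot above
vioplot(value ~ Group, data = data,
        main = "Product Values", xlab = "Groups", ylab = "Value",
        col = "lightblue")
```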

Based on the visualization, Test1 and Test2 appear to differ from the control group. Let's confirm this with ANOVA and then Dunnett's test.


### Step 3: ANOVA Comparison

We can calculate the F and p values using the aov function. Let's fit the model.

```r
model <- aov(value ~ Group, data = data)
summary(model)
```

```
            Df Sum Sq Mean Sq F value Pr(>F)  
Group        2  4.407  2.2036    3.71 0.0377 *
Residuals   27 16.035  0.5939                 
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

Based on the ANOVA model, the p-value is statistically significant (p < 0.05), indicating that not all group means are equal.

Let’s perform Dunnett’s test and identify which groups are statistically significant.


### Step 4: Dunnett’s Test

We can make use of the DunnettTest() function from the DescTools package.

The syntax of DunnettTest() is as follows:

```r
DunnettTest(x, g)
```

where:

- x: a numeric vector of values
- g: a vector of group names
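DescTools also provides a formula interface, and a `control` argument lets you name the reference level explicitly rather than relying on the default (the first factor level). A sketch, assuming the same `data` frame as above and that DescTools is installed:

```r
# Equivalent Dunnett's test using the formula interface, with the
# control group named explicitly via the 'control' argument.
library(DescTools)

set.seed(23)
data <- data.frame(Group = rep(c("control", "Test1", "Test2"), each = 10),
                   value = c(rnorm(10), rnorm(10), rnorm(10)))
data$Group <- as.factor(data$Group)

res <- DunnettTest(value ~ Group, data = data, control = "control")
res
```

Naming the control is useful when the reference level is not alphabetically first, since factor levels determine the default baseline.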

Let's load the library and perform Dunnett's test:

```r
library(DescTools)
DunnettTest(x = data$value, g = data$Group)
```

```
Dunnett's test for comparing several treatments with a control :  
    95% family-wise confidence level

$control
                    diff    lwr.ci      upr.ci   pval    
Test1-control -0.8742469 -1.678514 -0.06998022 0.0320 *  
Test2-control -0.7335283 -1.537795  0.07073836 0.0768 .  

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

## Conclusion

The control group scored significantly higher than Test1 (p < 0.05); the difference from Test2 was only marginally significant (p < 0.1) and does not reach the conventional 0.05 threshold.




I don’t get it. If the data for all three groups were sampled from the same distribution, where did the significant differences come from? If the differences are due to sampling error, shouldn’t we expect them to be non-significant?

The rnorm function was used to generate the sample data, meaning the samples were drawn at random from the same normally distributed population.

You’re right. The example samples from the same population, so any differences are due to chance alone. Try running the same code several times and check how often the ANOVA output comes out significant.
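The suggestion above can be sketched as a small simulation: generate null data many times and record the ANOVA p-value each time. Under the null hypothesis, roughly 5% of runs should be "significant" at alpha = 0.05, which is exactly the Type I error rate. The number of replications here is an arbitrary choice for illustration:

```r
# Repeat the null-data experiment and count how often the one-way
# ANOVA rejects at alpha = 0.05. All 30 values come from the same
# N(0, 1) population, so rejections are pure Type I errors.
set.seed(1)
n_reps <- 1000
pvals <- replicate(n_reps, {
  d <- data.frame(Group = rep(c("control", "Test1", "Test2"), each = 10),
                  value = rnorm(30))
  # Extract the p-value for the Group effect from the ANOVA table
  summary(aov(value ~ Group, data = d))[[1]][["Pr(>F)"]][1]
})
mean(pvals < 0.05)  # should hover around 0.05
```

Seeing a rejection rate near 0.05 makes the point concrete: the "significant" result in the tutorial's example is one of those chance rejections, not evidence of a real group difference.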