This article describes how to make a **graph of correlation matrix** in **R**. The R **symnum()** function is used. It takes the **correlation table** as an argument. The result is a table in which **correlation coefficients** are replaced by symbols according to the **degree of correlation**.


The R function **symnum** can be used to easily highlight the highly correlated variables. It replaces correlation coefficients by symbols according to the value.

The simplified format of the function is:

```
symnum(x, cutpoints = c(0.3, 0.6, 0.8, 0.9, 0.95),
       symbols = c(" ", ".", ",", "+", "*", "B"))
```

- **x** is the correlation matrix to visualize

- **cutpoints**: **correlation coefficient** cutpoints. Correlation coefficients between 0 and 0.3 are replaced by a space (" "); correlation coefficients between 0.3 and 0.6 are replaced by "."; etc.

- **symbols** : the symbols to use.
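As a minimal sketch of these arguments (the cutpoints and the three-variable subset of mtcars are arbitrary choices for illustration), you can supply your own cutpoints and symbols. Note that when cutpoints are given explicitly, `corr = TRUE` must be set so that absolute values are used and the 0 and 1 endpoints are added automatically:

```r
# Sketch: symnum() with custom cutpoints on a small correlation matrix.
# The cutpoints and the choice of three mtcars variables are arbitrary.
m <- cor(mtcars[, c("mpg", "wt", "qsec")])
symnum(m, cutpoints = c(0.5, 0.9), symbols = c(" ", "+", "*"), corr = TRUE)
```

Here |r| below 0.5 prints as a space, 0.5 to 0.9 as "+", 0.9 to 1 as "*", and the diagonal is shown as 1.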

The following R code performs a **correlation analysis** and displays a **graph of the correlation matrix**:

```
## Correlation matrix
corMat <- cor(mtcars)
head(round(corMat, 2))
```

```
       mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
mpg   1.00 -0.85 -0.85 -0.78  0.68 -0.87  0.42  0.66  0.60  0.48 -0.55
cyl  -0.85  1.00  0.90  0.83 -0.70  0.78 -0.59 -0.81 -0.52 -0.49  0.53
disp -0.85  0.90  1.00  0.79 -0.71  0.89 -0.43 -0.71 -0.59 -0.56  0.39
hp   -0.78  0.83  0.79  1.00 -0.45  0.66 -0.71 -0.72 -0.24 -0.13  0.75
drat  0.68 -0.70 -0.71 -0.45  1.00 -0.71  0.09  0.44  0.71  0.70 -0.09
wt   -0.87  0.78  0.89  0.66 -0.71  1.00 -0.17 -0.55 -0.69 -0.58  0.43
```

```
## Correlation graph for visualization
## abbr.colnames = FALSE to avoid abbreviation of column names
symnum(corMat, abbr.colnames = FALSE)
```

```
     mpg cyl disp hp drat wt qsec vs am gear carb
mpg  1
cyl  +   1
disp +   *   1
hp   ,   +   ,    1
drat ,   ,   ,    .  1
wt   +   ,   +    ,  ,    1
qsec .   .   .    ,          1
vs   ,   +   ,    ,  .    .  ,    1
am   .   .   .        ,    ,           1
gear .   .   .        ,    .           ,  1
carb .   .   .    ,       .  ,    .          1
attr(,"legend")
[1] 0 ' ' 0.3 '.' 0.6 ',' 0.8 '+' 0.9 '*' 0.95 'B' 1
```

The **symnum**() output above thus provides an easy way to spot the highly correlated variables at a glance.

`This analysis was performed using R (ver. 3.1.0).`


Previously, we described the essentials of R programming and provided quick start guides for importing data into **R**. Additionally, we described how to compute descriptive or summary statistics, correlation analysis, as well as, how to compare sample means using R software.

This chapter contains articles describing **statistical tests** to use for **comparing variances**.

- What is F-test?
- When do you use the F-test?
- Research questions and statistical hypotheses
- Formula of F-test
- Compute F-test in R

Read more: F-Test: Compare Two Variances in R.

This article describes **statistical tests** for comparing the **variances** of two or more samples.

- Compute **Bartlett’s test** in R
- Compute **Levene’s test** in R
- Compute the **Fligner-Killeen test** in R

Read more: Compare Multiple Sample Variances in R.

This analysis has been performed using **R statistical software** (ver. 3.2.4).

**Contents**

Comparing two variances is useful in several cases, including:

- When you want to perform a two-samples t-test and need to check the equality of the variances of the two samples

- When you want to compare the variability of a new measurement method to that of an old one. Does the new method reduce the variability of the measure?

Typical research questions are:

- whether the variance of group A (\(\sigma^2_A\)) *is equal* to the variance of group B (\(\sigma^2_B\))
- whether the variance of group A (\(\sigma^2_A\)) *is less than* the variance of group B (\(\sigma^2_B\))
- whether the variance of group A (\(\sigma^2_A\)) *is greater than* the variance of group B (\(\sigma^2_B\))

In statistics, we can define the corresponding *null hypothesis* (\(H_0\)) as follow:

- \(H_0: \sigma^2_A = \sigma^2_B\)
- \(H_0: \sigma^2_A \leq \sigma^2_B\)
- \(H_0: \sigma^2_A \geq \sigma^2_B\)

The corresponding *alternative hypotheses* (\(H_a\)) are as follow:

- \(H_a: \sigma^2_A \ne \sigma^2_B\) (different)
- \(H_a: \sigma^2_A > \sigma^2_B\) (greater)
- \(H_a: \sigma^2_A < \sigma^2_B\) (less)

Note that:

- Hypotheses 1) are called **two-tailed tests**
- Hypotheses 2) and 3) are called **one-tailed tests**

The test statistic can be obtained by computing the ratio of the two variances \(S_A^2\) and \(S_B^2\).

\[F = \frac{S_A^2}{S_B^2}\]

The degrees of freedom are \(n_A - 1\) (for the numerator) and \(n_B - 1\) (for the denominator).

Note that the more this ratio deviates from 1, the stronger the evidence for unequal population variances.

Note that the F-test requires the two samples to be normally distributed.
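To make the formula concrete, here is a minimal sketch that computes the F statistic and a two-sided p-value by hand (the two vectors are made-up illustration data, not from this tutorial):

```r
# Sketch: F statistic for comparing two variances, computed by hand
# (x and y are made-up example vectors)
x <- c(5.1, 4.9, 6.2, 5.8, 5.5, 6.0)
y <- c(4.2, 7.1, 3.9, 6.8, 5.0, 7.5)
F_stat <- var(x) / var(y)        # ratio of the two sample variances
df1 <- length(x) - 1             # numerator degrees of freedom
df2 <- length(y) - 1             # denominator degrees of freedom
# Two-sided p-value from the F distribution
p <- 2 * min(pf(F_stat, df1, df2), pf(F_stat, df1, df2, lower.tail = FALSE))
```

This reproduces what `var.test(x, y)` reports as the test statistic and p-value.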

The R function **var.test**() can be used to compare two variances as follow:

```
# Method 1
var.test(values ~ groups, data,
         alternative = "two.sided")
# or Method 2
var.test(x, y, alternative = "two.sided")
```

- **x, y**: numeric vectors
- **alternative**: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

To import your data, use the following R code:

```
# If .txt tab file, use this
my_data <- read.delim(file.choose())
# Or, if .csv file, use this
my_data <- read.csv(file.choose())
```

Here, we’ll use the built-in R data set named ToothGrowth:

```
# Store the data in the variable my_data
my_data <- ToothGrowth
```

To have an idea of what the data look like, we start by displaying a random sample of 10 rows using the function **sample_n**() [in the **dplyr** package]:

```
library("dplyr")
sample_n(my_data, 10)
```

```
len supp dose
43 23.6 OJ 1.0
28 21.5 VC 2.0
25 26.4 VC 2.0
56 30.9 OJ 2.0
46 25.2 OJ 1.0
7 11.2 VC 0.5
16 17.3 VC 1.0
4 5.8 VC 0.5
48 21.2 OJ 1.0
37 8.2 OJ 0.5
```

We want to test the equality of variances between the two groups OJ and VC in the column “supp”.

The F-test is very sensitive to departures from the normality assumption. You need to check whether the data are normally distributed before using it.

The Shapiro-Wilk test can be used to check whether the normality assumption holds. It’s also possible to use a **Q-Q plot** (quantile-quantile plot) to graphically evaluate the normality of a variable. A Q-Q plot draws the correlation between a given sample and the normal distribution.

If there is doubt about normality, a better choice is **Levene’s test** or the **Fligner-Killeen test**, which are less sensitive to departures from normality.
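For the ToothGrowth data used below, a minimal sketch of these normality checks:

```r
# Sketch: per-group normality checks before running the F-test
my_data <- ToothGrowth
# Shapiro-Wilk test for each level of "supp"
with(my_data, shapiro.test(len[supp == "OJ"]))
with(my_data, shapiro.test(len[supp == "VC"]))
# Q-Q plot for one group (points close to the line suggest normality)
qqnorm(my_data$len[my_data$supp == "OJ"])
qqline(my_data$len[my_data$supp == "OJ"])
```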

```
# F-test
res.ftest <- var.test(len ~ supp, data = my_data)
res.ftest
```

```
F test to compare two variances
data: len by supp
F = 0.6386, num df = 29, denom df = 29, p-value = 0.2331
alternative hypothesis: true ratio of variances is not equal to 1
95 percent confidence interval:
0.3039488 1.3416857
sample estimates:
ratio of variances
0.6385951
```

The p-value of the **F-test** is p = 0.2331433, which is greater than the significance level 0.05. In conclusion, there is no significant difference between the two variances.

The function **var.test**() returns a list containing the following components:

- **statistic**: the value of the F test statistic.
- **parameter**: the degrees of freedom of the F distribution of the test statistic.
- **p.value**: the p-value of the test.
- **conf.int**: a confidence interval for the ratio of the population variances.
- **estimate**: the ratio of the sample variances.

The format of the **R** code to use for getting these values is as follow:

```
# ratio of variances
res.ftest$estimate
```

```
ratio of variances
0.6385951
```

```
# p-value of the test
res.ftest$p.value
```

`[1] 0.2331433`
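The other components listed above can be extracted the same way; for example:

```r
# Sketch: extracting more components of the var.test() result
my_data <- ToothGrowth
res.ftest <- var.test(len ~ supp, data = my_data)
res.ftest$statistic   # F value (here equal to the ratio of variances)
res.ftest$parameter   # numerator and denominator degrees of freedom
res.ftest$conf.int    # 95% confidence interval for the ratio of variances
```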

This analysis has been performed using **R software** (ver. 3.3.2).

- **Prepare your data** as specified here: Best practices for preparing your data set for R
- **Save your data** in an external .txt tab or .csv file
- **Import your data into R** as follow:

```
# If .txt tab file, use this
my_data <- read.delim(file.choose())
# Or, if .csv file, use this
my_data <- read.csv(file.choose())
```

Here, we’ll use the built-in R data set named *PlantGrowth*. It contains the weight of plants obtained under a control and two different treatment conditions.

`my_data <- PlantGrowth`

```
# print the head of the file
head(my_data)
```

```
weight group
1 4.17 ctrl
2 5.58 ctrl
3 5.18 ctrl
4 6.11 ctrl
5 4.50 ctrl
6 4.61 ctrl
```

In R terminology, the column “group” is called a factor and the different categories (“ctrl”, “trt1”, “trt2”) are named factor levels. **The levels are ordered alphabetically**.

```
# Show the group levels
levels(my_data$group)
```

`[1] "ctrl" "trt1" "trt2"`

If the levels are not automatically in the correct order, re-order them as follow:

```
my_data$group <- ordered(my_data$group,
                         levels = c("ctrl", "trt1", "trt2"))
```

It’s possible to compute summary statistics by groups. The dplyr package can be used.

- To install the **dplyr** package, type this:

`install.packages("dplyr")`

- Compute summary statistics by groups:

```
library(dplyr)
group_by(my_data, group) %>%
  summarise(
    count = n(),
    mean = mean(weight, na.rm = TRUE),
    sd = sd(weight, na.rm = TRUE),
    median = median(weight, na.rm = TRUE),
    IQR = IQR(weight, na.rm = TRUE)
  )
```

```
Source: local data frame [3 x 6]
group count mean sd median IQR
(fctr) (int) (dbl) (dbl) (dbl) (dbl)
1 ctrl 10 5.032 0.5830914 5.155 0.7425
2 trt1 10 4.661 0.7936757 4.550 0.6625
3 trt2 10 5.526 0.4425733 5.435 0.4675
```

To use R base graphs, read this: R base graphs. Here, we’ll use the **ggpubr** R package for an easy ggplot2-based data visualization.

- Install the latest version of ggpubr from GitHub as follow (recommended):

```
# Install
if(!require(devtools)) install.packages("devtools")
devtools::install_github("kassambara/ggpubr")
```

- Or, install from CRAN as follow:

`install.packages("ggpubr")`

- Visualize your data with ggpubr:

```
# Box plots
# ++++++++++++++++++++
# Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
          color = "group", palette = c("#00AFBB", "#E7B800", "#FC4E07"),
          order = c("ctrl", "trt1", "trt2"),
          ylab = "Weight", xlab = "Treatment")
```

```
# Mean plots
# ++++++++++++++++++++
# Plot weight by group
# Add error bars: mean_se
# (other values include: mean_sd, mean_ci, median_iqr, ....)
library("ggpubr")
ggline(my_data, x = "group", y = "weight",
       add = c("mean_se", "jitter"),
       order = c("ctrl", "trt1", "trt2"),
       ylab = "Weight", xlab = "Treatment")
```

We want to know if there is any significant difference between the average weights of plants in the 3 experimental conditions.

The test can be performed using the function **kruskal.test**() as follow:

`kruskal.test(weight ~ group, data = my_data)`

```
Kruskal-Wallis rank sum test
data: weight by group
Kruskal-Wallis chi-squared = 7.9882, df = 2, p-value = 0.01842
```

As the p-value is less than the significance level 0.05, we can conclude that there are significant differences between the treatment groups.

From the output of the Kruskal-Wallis test, we know that there is a significant difference between groups, but we don’t know which pairs of groups are different.

It’s possible to use the function **pairwise.wilcox.test**() to calculate pairwise comparisons between group levels with corrections for multiple testing.

```
pairwise.wilcox.test(PlantGrowth$weight, PlantGrowth$group,
                     p.adjust.method = "BH")
```

```
Pairwise comparisons using Wilcoxon rank sum test
data: PlantGrowth$weight and PlantGrowth$group
ctrl trt1
trt1 0.199 -
trt2 0.095 0.027
P value adjustment method: BH
```

The pairwise comparison shows that only trt1 and trt2 are significantly different (p < 0.05).


This analysis has been performed using **R software** (ver. 3.2.4).

In the situation where there are multiple response variables, you can test them simultaneously using a **multivariate analysis of variance** (**MANOVA**). This article describes how to compute **MANOVA** in R.

For example, we may conduct an experiment where we give two treatments (A and B) to two groups of mice, and we are interested in the weight and height of mice. In that case, the weight and height of mice are two dependent variables, and our hypothesis is that both together are affected by the difference in treatment. A multivariate analysis of variance could be used to test this hypothesis.

MANOVA can be used in certain conditions:

- The dependent variables should be normally distributed within groups. The R function **mshapiro.test**() [in the **mvnormtest** package] can be used to perform the Shapiro-Wilk test for multivariate normality. This is useful in the case of MANOVA, which assumes **multivariate normality**.

- Homogeneity of variances across the range of predictors.

- Linearity between all pairs of dependent variables, all pairs of covariates, and all dependent variable-covariate pairs in each cell.

If the global multivariate test is significant, we conclude that the corresponding effect (treatment) is significant. In that case, the next question is to determine if the treatment affects only the weight, only the height or both. In other words, we want to identify the specific dependent variables that contributed to the significant global effect.

To answer this question, we can use one-way ANOVA (or univariate ANOVA) to examine separately each dependent variable.

- **Prepare your data** as specified here: Best practices for preparing your data set for R
- **Save your data** in an external .txt tab or .csv file
- **Import your data into R** as follow:

```
# If .txt tab file, use this
my_data <- read.delim(file.choose())
# Or, if .csv file, use this
my_data <- read.csv(file.choose())
```

Here, we’ll use iris data set:

```
# Store the data in the variable my_data
my_data <- iris
```

The R code below displays a random sample of our data using the function **sample_n**() [in the **dplyr** package]. First, install dplyr if you don’t have it:

`install.packages("dplyr")`

```
# Show a random sample
set.seed(1234)
dplyr::sample_n(my_data, 10)
```

```
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
94 5.0 2.3 3.3 1.0 versicolor
91 5.5 2.6 4.4 1.2 versicolor
93 5.8 2.6 4.0 1.2 versicolor
127 6.2 2.8 4.8 1.8 virginica
150 5.9 3.0 5.1 1.8 virginica
2 4.9 3.0 1.4 0.2 setosa
34 5.5 4.2 1.4 0.2 setosa
96 5.7 3.0 4.2 1.2 versicolor
74 6.1 2.8 4.7 1.2 versicolor
98 6.2 2.9 4.3 1.3 versicolor
```

Question: We want to know if there is any significant difference, in *sepal* and *petal* length, between the different species.

The function **manova**() can be used as follow:

```
# MANOVA test
res.man <- manova(cbind(Sepal.Length, Petal.Length) ~ Species, data = iris)
summary(res.man)
```

```
Df Pillai approx F num Df den Df Pr(>F)
Species 2 0.9885 71.829 4 294 < 2.2e-16 ***
Residuals 147
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

```
# Look to see which differ
summary.aov(res.man)
```

```
Response Sepal.Length :
Df Sum Sq Mean Sq F value Pr(>F)
Species 2 63.212 31.606 119.26 < 2.2e-16 ***
Residuals 147 38.956 0.265
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Response Petal.Length :
Df Sum Sq Mean Sq F value Pr(>F)
Species 2 437.10 218.551 1180.2 < 2.2e-16 ***
Residuals 147 27.22 0.185
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

From the output above, it can be seen that the two variables are highly significantly different among Species.

- Analysis of variance (ANOVA, parametric):
  - One-Way ANOVA Test in R
  - Two-Way ANOVA Test in R
- Kruskal-Wallis Test in R (non-parametric alternative to one-way ANOVA)

This analysis has been performed using **R software** (ver. 3.2.4).

The **paired samples Wilcoxon test** (also known as **Wilcoxon signed-rank test**) is a **non-parametric** alternative to paired t-test used to compare paired data. It’s used when your data are not normally distributed. This tutorial describes how to compute paired samples Wilcoxon test in **R**.

Differences between paired samples should be distributed symmetrically around the median.
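Using the mice weight data from this tutorial, a quick sketch for inspecting that symmetry assumption:

```r
# Sketch: check the paired differences for rough symmetry
before <- c(200.1, 190.9, 192.7, 213, 241.4, 196.9, 172.2, 185.5, 205.2, 193.7)
after  <- c(392.9, 393.2, 345.1, 393, 434, 427.9, 422, 383.9, 392.3, 352.2)
d <- after - before
hist(d, main = "Paired differences", xlab = "after - before")
# Rough numeric check: for a symmetric distribution, mean and median are close
c(mean = mean(d), median = median(d))
```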

The R function **wilcox.test**() can be used as follow:

`wilcox.test(x, y, paired = TRUE, alternative = "two.sided")`

- **x, y**: numeric vectors
- **paired**: a logical value specifying that we want to compute a paired Wilcoxon test
- **alternative**: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

- **Prepare your data** as specified here: Best practices for preparing your data set for R
- **Save your data** in an external .txt tab or .csv file
- **Import your data into R** as follow:

```
# If .txt tab file, use this
my_data <- read.delim(file.choose())
# Or, if .csv file, use this
my_data <- read.csv(file.choose())
```

Here, we’ll use an example data set, which contains the weight of 10 mice before and after the treatment.

```
# Data in two numeric vectors
# ++++++++++++++++++++++++++
# Weight of the mice before treatment
before <- c(200.1, 190.9, 192.7, 213, 241.4, 196.9, 172.2, 185.5, 205.2, 193.7)
# Weight of the mice after treatment
after <- c(392.9, 393.2, 345.1, 393, 434, 427.9, 422, 383.9, 392.3, 352.2)
# Create a data frame
my_data <- data.frame(
  group = rep(c("before", "after"), each = 10),
  weight = c(before, after)
)
```

We want to know whether there is any significant difference in the median weights before and after treatment.

```
# Print all data
print(my_data)
```

```
group weight
1 before 200.1
2 before 190.9
3 before 192.7
4 before 213.0
5 before 241.4
6 before 196.9
7 before 172.2
8 before 185.5
9 before 205.2
10 before 193.7
11 after 392.9
12 after 393.2
13 after 345.1
14 after 393.0
15 after 434.0
16 after 427.9
17 after 422.0
18 after 383.9
19 after 392.3
20 after 352.2
```

Compute summary statistics (median and interquartile range (IQR)) by groups using the **dplyr** package.

- Install the **dplyr** package:

`install.packages("dplyr")`

- Compute summary statistics by groups:

```
library("dplyr")
group_by(my_data, group) %>%
  summarise(
    count = n(),
    median = median(weight, na.rm = TRUE),
    IQR = IQR(weight, na.rm = TRUE)
  )
```

```
Source: local data frame [2 x 4]
group count median IQR
(fctr) (int) (dbl) (dbl)
1 after 10 392.95 28.800
2 before 10 195.30 12.575
```

To use R base graphs, read this: R base graphs. Here, we’ll use the **ggpubr** R package for an easy ggplot2-based data visualization.

- Install the latest version of ggpubr from GitHub as follow (recommended):

```
# Install
if(!require(devtools)) install.packages("devtools")
devtools::install_github("kassambara/ggpubr")
```

- Or, install from CRAN as follow:

`install.packages("ggpubr")`

- Visualize your data:

```
# Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
          color = "group", palette = c("#00AFBB", "#E7B800"),
          order = c("before", "after"),
          ylab = "Weight", xlab = "Groups")
```

Box plots show you the increase, but lose the paired information. You can use the functions **paired**() and **plot**() [in the **PairedData** package] to plot paired data (“before - after” plot).

- Install pairedData package:

`install.packages("PairedData")`

- Plot paired data:

```
# Subset weight data before treatment
before <- subset(my_data, group == "before", weight,
                 drop = TRUE)
# Subset weight data after treatment
after <- subset(my_data, group == "after", weight,
                drop = TRUE)
# Plot paired data
library(PairedData)
pd <- paired(before, after)
plot(pd, type = "profile") + theme_bw()
```

Question: Are there any significant changes in the weights of mice before and after treatment?

**1) Compute paired Wilcoxon test - Method 1**: The data are saved in two different numeric vectors.

```
res <- wilcox.test(before, after, paired = TRUE)
res
```

```
Wilcoxon signed rank test
data: before and after
V = 0, p-value = 0.001953
alternative hypothesis: true location shift is not equal to 0
```

**2) Compute paired Wilcoxon-test - Method 2**: The data are saved in a data frame.

```
# Compute paired Wilcoxon test
res <- wilcox.test(weight ~ group, data = my_data, paired = TRUE)
res
```

```
Wilcoxon signed rank test
data: weight by group
V = 55, p-value = 0.001953
alternative hypothesis: true location shift is not equal to 0
```

```
# print only the p-value
res$p.value
```

`[1] 0.001953125`

As you can see, the two methods give the same results.

The **p-value** of the test is 0.001953, which is less than the significance level alpha = 0.05. We can conclude that the median weight of the mice before treatment is significantly different from the median weight after treatment with a **p-value** = 0.001953.

Note that:

- if you want to test whether the median weight before treatment is less than the median weight after treatment, type this:

```
wilcox.test(weight ~ group, data = my_data, paired = TRUE,
            alternative = "less")
```

- Or, if you want to test whether the median weight before treatment is greater than the median weight after treatment, type this

```
wilcox.test(weight ~ group, data = my_data, paired = TRUE,
            alternative = "greater")
```


This analysis has been performed using **R software** (ver. 3.2.4).

The **unpaired two-samples Wilcoxon test** (also known as **Wilcoxon rank sum test** or **Mann-Whitney** test) is a non-parametric alternative to the unpaired two-samples t-test, which can be used to compare two independent groups of samples. It’s used when your data are not normally distributed.

This article describes how to compute two samples Wilcoxon test in R.

To perform a two-samples Wilcoxon test comparing the medians of two independent samples (x & y), the R function **wilcox.test**() can be used as follow:

`wilcox.test(x, y, alternative = "two.sided")`

- **x, y**: numeric vectors
- **alternative**: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

- **Prepare your data** as specified here: Best practices for preparing your data set for R
- **Save your data** in an external .txt tab or .csv file
- **Import your data into R** as follow:

```
# If .txt tab file, use this
my_data <- read.delim(file.choose())
# Or, if .csv file, use this
my_data <- read.csv(file.choose())
```

Here, we’ll use an example data set, which contains the weight of 18 individuals (9 women and 9 men):

```
# Data in two numeric vectors
women_weight <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)
men_weight <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4)
# Create a data frame
my_data <- data.frame(
  group = rep(c("Woman", "Man"), each = 9),
  weight = c(women_weight, men_weight)
)
```

We want to know whether the median women’s weight differs from the median men’s weight.

`print(my_data)`

```
group weight
1 Woman 38.9
2 Woman 61.2
3 Woman 73.3
4 Woman 21.8
5 Woman 63.4
6 Woman 64.6
7 Woman 48.4
8 Woman 48.8
9 Woman 48.5
10 Man 67.8
11 Man 60.0
12 Man 63.4
13 Man 76.0
14 Man 89.4
15 Man 73.3
16 Man 67.3
17 Man 61.3
18 Man 62.4
```

It’s possible to compute summary statistics (median and interquartile range (IQR)) by groups. The dplyr package can be used.

- To install the **dplyr** package, type this:

`install.packages("dplyr")`

- Compute summary statistics by groups:

```
library(dplyr)
group_by(my_data, group) %>%
  summarise(
    count = n(),
    median = median(weight, na.rm = TRUE),
    IQR = IQR(weight, na.rm = TRUE)
  )
```

```
Source: local data frame [2 x 4]
group count median IQR
(fctr) (int) (dbl) (dbl)
1 Man 9 67.3 10.9
2 Woman 9 48.8 15.0
```

You can draw R base graphs as described at this link: R base graphs. Here, we’ll use the **ggpubr** R package for an easy ggplot2-based data visualization

- Install the latest version of ggpubr from GitHub as follow (recommended):

```
# Install
if(!require(devtools)) install.packages("devtools")
devtools::install_github("kassambara/ggpubr")
```

- Or, install from CRAN as follow:

`install.packages("ggpubr")`

- Visualize your data:

```
# Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
          color = "group", palette = c("#00AFBB", "#E7B800"),
          ylab = "Weight", xlab = "Groups")
```

Question : Is there any significant difference between women and men weights?

**1) Compute two-samples Wilcoxon test - Method 1**: The data are saved in two different numeric vectors.

```
res <- wilcox.test(women_weight, men_weight)
res
```

```
Wilcoxon rank sum test with continuity correction
data: women_weight and men_weight
W = 15, p-value = 0.02712
alternative hypothesis: true location shift is not equal to 0
```

It will give a warning message saying that “cannot compute exact p-value with ties”. This comes from the assumption of a Wilcoxon test that the responses are continuous. You can suppress this message by adding another argument, **exact = FALSE**, but the result will be the same.

**2) Compute two-samples Wilcoxon test - Method 2**: The data are saved in a data frame.

```
res <- wilcox.test(weight ~ group, data = my_data,
                   exact = FALSE)
res
```

```
Wilcoxon rank sum test with continuity correction
data: weight by group
W = 66, p-value = 0.02712
alternative hypothesis: true location shift is not equal to 0
```

```
# Print the p-value only
res$p.value
```

`[1] 0.02711657`

As you can see, the two methods give the same results.

The **p-value** of the test is 0.02712, which is less than the significance level alpha = 0.05. We can conclude that men’s median weight is significantly different from women’s median weight with a **p-value** = 0.02712.

Note that:

- if you want to test whether the median men’s weight is less than the median women’s weight, type this:

```
wilcox.test(weight ~ group, data = my_data,
            exact = FALSE, alternative = "less")
```

- Or, if you want to test whether the median men’s weight is greater than the median women’s weight, type this

```
wilcox.test(weight ~ group, data = my_data,
            exact = FALSE, alternative = "greater")
```

You can perform unpaired **two-samples Wilcoxon test**, **online**, without any installation by clicking the following link:

- Compare one-sample mean to a standard known mean
- Compare the means of two independent groups

This analysis has been performed using **R software** (ver. 3.2.4).

- What is unpaired two-samples t-test?
- Research questions and statistical hypotheses
- Formula of unpaired two-samples t-test
- Visualize your data and compute unpaired two-samples t-test in R
- Install ggpubr R package for data visualization
- R function to compute unpaired two-samples t-test
- Import your data into R
- Check your data
- Visualize your data using box plots
- Preliminary test to check independent t-test assumptions
- Compute unpaired two-samples t-test
- Interpretation of the result
- Access to the values returned by t.test() function

- Online unpaired two-samples t-test calculator
- See also
- Infos

The **unpaired two-samples t-test** is used to compare the **mean** of two independent groups.

For example, suppose that we have measured the weight of 100 individuals: 50 women (group A) and 50 men (group B). We want to know if the mean weight of women (\(m_A\)) is significantly different from that of men (\(m_B\)).

In this case, we have two unrelated (i.e., independent or unpaired) groups of samples. Therefore, it’s possible to use an **independent t-test** to evaluate whether the means are different.

Note that the unpaired two-samples t-test can be used only under certain conditions:

- when the two groups of samples (A and B) being compared are **normally distributed**. This can be checked using the **Shapiro-Wilk test**.
- and when the **variances** of the two groups are equal. This can be checked using the **F-test**.

This article describes the formula of the independent t-test and provides practical examples in R.

Typical research questions are:

- whether the mean of group A (\(m_A\)) *is equal* to the mean of group B (\(m_B\))
- whether the mean of group A (\(m_A\)) *is less than* the mean of group B (\(m_B\))
- whether the mean of group A (\(m_A\)) *is greater than* the mean of group B (\(m_B\))

In statistics, we can define the corresponding *null hypothesis* (\(H_0\)) as follow:

- \(H_0: m_A = m_B\)
- \(H_0: m_A \leq m_B\)
- \(H_0: m_A \geq m_B\)

The corresponding *alternative hypotheses* (\(H_a\)) are as follow:

- \(H_a: m_A \ne m_B\) (different)
- \(H_a: m_A > m_B\) (greater)
- \(H_a: m_A < m_B\) (less)

Note that:

- Hypotheses 1) are called **two-tailed tests**
- Hypotheses 2) and 3) are called **one-tailed tests**

1. **Classical t-test**:

If the variances of the two groups are equivalent (**homoscedasticity**), the **t-test value** comparing the two samples (\(A\) and \(B\)) can be calculated as follow.

\[ t = \frac{m_A - m_B}{\sqrt{ \frac{S^2}{n_A} + \frac{S^2}{n_B} }} \]

where,

- \(m_A\) and \(m_B\) represent the mean value of the group A and B, respectively.
- \(n_A\) and \(n_B\) represent the sizes of the group A and B, respectively.
- \(S^2\) is an estimator of the pooled **variance** of the two groups. It can be calculated as follow:

\[ S^2 = \frac{\sum{(x-m_A)^2}+\sum{(x-m_B)^2}}{n_A+n_B-2} \]

with degrees of freedom (df): \(df = n_A + n_B - 2\).
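As a sketch, the classical t statistic can be computed by hand in R and checked against t.test() with var.equal = TRUE, using the women/men weight data from this article’s example:

```r
# Sketch: pooled-variance (classical) t statistic computed by hand
A <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)  # group A (women)
B <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4)      # group B (men)
nA <- length(A); nB <- length(B)
# Pooled variance estimator
S2 <- (sum((A - mean(A))^2) + sum((B - mean(B))^2)) / (nA + nB - 2)
t_stat <- (mean(A) - mean(B)) / sqrt(S2 / nA + S2 / nB)
df <- nA + nB - 2
```

Both t_stat and df match `t.test(A, B, var.equal = TRUE)`.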

2. **Welch t-statistic**:

If the variances of the two groups being compared are different (**heteroscedasticity**), it’s possible to use the **Welch t test**, an adaptation of Student t-test.

The **Welch t-statistic** is calculated as follow:

\[ t = \frac{m_A - m_B}{\sqrt{ \frac{S_A^2}{n_A} + \frac{S_B^2}{n_B} }} \]

where \(S_A\) and \(S_B\) are the standard deviations of the two groups A and B, respectively.

Unlike the classic Student’s t-test, the **Welch t-test formula** involves the variance of each of the two groups (\(S_A^2\) and \(S_B^2\)) being compared. In other words, it does not use the pooled variance \(S^2\).

The **degrees of freedom** of the **Welch t-test** are estimated as follow:

\[ df = \frac{(\frac{S_A^2}{n_A} + \frac{S_B^2}{n_B})^2}{\frac{S_A^4}{n_A^2(n_A-1)} + \frac{S_B^4}{n_B^2(n_B-1)}} \]

A p-value can be computed for the corresponding absolute value of t-statistic (|t|).

Note that the Welch t-test is considered the safer one. Usually, the results of the **classical t-test** and the **Welch t-test** are very similar unless both the group sizes and the standard deviations are very different.
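A minimal sketch of the Welch statistic, the Welch-Satterthwaite degrees of freedom and the p-value computed by hand, checked against t.test() (Welch is R’s default), again using the women/men weight data from this article’s example:

```r
# Sketch: Welch t statistic and Welch-Satterthwaite df computed by hand
A <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)  # group A (women)
B <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4)      # group B (men)
vA <- var(A); vB <- var(B); nA <- length(A); nB <- length(B)
t_welch <- (mean(A) - mean(B)) / sqrt(vA / nA + vB / nB)
df_welch <- (vA / nA + vB / nB)^2 /
  ((vA / nA)^2 / (nA - 1) + (vB / nB)^2 / (nB - 1))
# Two-sided p-value from |t|
p_welch <- 2 * pt(-abs(t_welch), df_welch)
```

All three quantities match the statistic, parameter and p.value components of `t.test(A, B)`.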

How to interpret the results?

If the p-value is less than or equal to the significance level 0.05, we can reject the null hypothesis and accept the alternative hypothesis. In other words, we can conclude that the mean values of groups A and B are significantly different.

You can draw R base graphs as described at this link: R base graphs. Here, we’ll use the **ggpubr** R package for an easy ggplot2-based data visualization

- Install the latest version from GitHub as follow (recommended):

```
# Install
if(!require(devtools)) install.packages("devtools")
devtools::install_github("kassambara/ggpubr")
```

- Or, install from CRAN as follow:

`install.packages("ggpubr")`

To perform two-samples t-test comparing the means of two independent samples (x & y), the R function **t.test**() can be used as follow:

`t.test(x, y, alternative = "two.sided", var.equal = FALSE)`

- **x, y**: numeric vectors
- **alternative**: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.
- **var.equal**: a logical variable indicating whether to treat the two variances as being equal. If TRUE, the pooled variance is used to estimate the variance; otherwise the Welch test is used.

- **Prepare your data** as specified here: Best practices for preparing your data set for R
- **Save your data** in an external .txt tab or .csv file
- **Import your data into R** as follow:

```
# If .txt tab file, use this
my_data <- read.delim(file.choose())
# Or, if .csv file, use this
my_data <- read.csv(file.choose())
```

Here, we’ll use an example data set, which contains the weight of 18 individuals (9 women and 9 men):

```
# Data in two numeric vectors
women_weight <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)
men_weight <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4)
# Create a data frame
my_data <- data.frame(
group = rep(c("Woman", "Man"), each = 9),
weight = c(women_weight, men_weight)
)
```

We want to know whether the average women’s weight differs from the average men’s weight.

```
# Print all data
print(my_data)
```

```
group weight
1 Woman 38.9
2 Woman 61.2
3 Woman 73.3
4 Woman 21.8
5 Woman 63.4
6 Woman 64.6
7 Woman 48.4
8 Woman 48.8
9 Woman 48.5
10 Man 67.8
11 Man 60.0
12 Man 63.4
13 Man 76.0
14 Man 89.4
15 Man 73.3
16 Man 67.3
17 Man 61.3
18 Man 62.4
```

It’s possible to compute summary statistics (mean and sd) by groups. The dplyr package can be used.

- To install the **dplyr** package, type this:

`install.packages("dplyr")`

- Compute summary statistics by groups:

```
library(dplyr)
group_by(my_data, group) %>%
summarise(
count = n(),
mean = mean(weight, na.rm = TRUE),
sd = sd(weight, na.rm = TRUE)
)
```

```
Source: local data frame [2 x 4]
group count mean sd
(fctr) (int) (dbl) (dbl)
1 Man 9 68.98889 9.375426
2 Woman 9 52.10000 15.596714
```

```
# Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
color = "group", palette = c("#00AFBB", "#E7B800"),
ylab = "Weight", xlab = "Groups")
```

**Assumption 1**: Are the two samples independent?

Yes, since the samples from men and women are not related.

**Assumption 2**: Do the data from each of the two groups follow a normal distribution?

Use the Shapiro-Wilk normality test as described at: Normality Test in R.

- Null hypothesis: the data are normally distributed
- Alternative hypothesis: the data are not normally distributed

We’ll use the functions **with**() and **shapiro.test**() to compute Shapiro-Wilk test for each group of samples.

```
# Shapiro-Wilk normality test for Men's weights
with(my_data, shapiro.test(weight[group == "Man"]))# p = 0.1
# Shapiro-Wilk normality test for Women's weights
with(my_data, shapiro.test(weight[group == "Woman"])) # p = 0.6
```

From the output, the two p-values are greater than the significance level 0.05, implying that the distributions of the data are not significantly different from the normal distribution. In other words, we can assume normality.

Note that, if the data are not normally distributed, it’s recommended to use the non-parametric two-samples Wilcoxon rank test.

**Assumption 3**. Do the two populations have the same variances?

We’ll use the **F-test** to test for homogeneity in variances. This can be performed with the function **var.test**() as follows:

```
res.ftest <- var.test(weight ~ group, data = my_data)
res.ftest
```

```
F test to compare two variances
data: weight by group
F = 0.36134, num df = 8, denom df = 8, p-value = 0.1714
alternative hypothesis: true ratio of variances is not equal to 1
95 percent confidence interval:
0.08150656 1.60191315
sample estimates:
ratio of variances
0.3613398
```

The p-value of the **F-test** is p = 0.1714. It’s greater than the significance level alpha = 0.05. In conclusion, there is no significant difference between the variances of the two sets of data. Therefore, we can use the classic **t-test**, which assumes equality of **the two variances**.
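This two-step decision can be scripted: run **var.test**() first and feed its outcome into the `var.equal` argument. A minimal sketch, reusing the weight data from above:

```r
# Rebuild the example data used above
women_weight <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)
men_weight   <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4)
my_data <- data.frame(
  group  = rep(c("Woman", "Man"), each = 9),
  weight = c(women_weight, men_weight)
)

# F-test first, then choose the t-test variant accordingly
ftest <- var.test(weight ~ group, data = my_data)
equal_var <- ftest$p.value > 0.05   # TRUE here (p = 0.1714)
t.test(weight ~ group, data = my_data, var.equal = equal_var)
```

Note, however, that many statisticians recommend simply using the Welch test by default rather than pre-testing the variances.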

Question: Is there any significant difference between women’s and men’s weights?

**1) Compute independent t-test - Method 1**: The data are saved in two different numeric vectors.

```
# Compute t-test
res <- t.test(women_weight, men_weight, var.equal = TRUE)
res
```

```
Two Sample t-test
data: women_weight and men_weight
t = -2.7842, df = 16, p-value = 0.01327
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-29.748019 -4.029759
sample estimates:
mean of x mean of y
52.10000 68.98889
```

**2) Compute independent t-test - Method 2**: The data are saved in a data frame.

```
# Compute t-test
res <- t.test(weight ~ group, data = my_data, var.equal = TRUE)
res
```

```
Two Sample t-test
data: weight by group
t = 2.7842, df = 16, p-value = 0.01327
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
4.029759 29.748019
sample estimates:
mean in group Man mean in group Woman
68.98889 52.10000
```

As you can see, the two methods give the same results.

In the result above:

- **t** is the **t-test statistic** value (t = 2.784),
- **df** is the degrees of freedom (df = 16),
- **p-value** is the significance level of the **t-test** (p-value = 0.01327),
- **conf.int** is the **confidence interval** of the mean at 95% (conf.int = [4.0298, 29.748]),
- **sample estimates** are the mean values of the two samples (68.9889 and 52.1).

Note that:

- if you want to test whether the average men’s weight is less than the average women’s weight, type this:

```
t.test(weight ~ group, data = my_data,
var.equal = TRUE, alternative = "less")
```

- Or, if you want to test whether the average men’s weight is greater than the average women’s weight, type this

```
t.test(weight ~ group, data = my_data,
var.equal = TRUE, alternative = "greater")
```

The **p-value** of the test is 0.01327, which is less than the significance level alpha = 0.05. We can conclude that men’s average weight is significantly different from women’s average weight (**p-value** = 0.01327).

The result of **t.test()** function is a list containing the following components:

- **statistic**: the value of the **t-test statistic**
- **parameter**: the **degrees of freedom** for the **t-test statistic**
- **p.value**: the **p-value** for the test
- **conf.int**: a **confidence interval** for the mean appropriate to the specified **alternative hypothesis**
- **estimate**: the means of the two groups being compared (in the case of an **independent t-test**) or the difference in means (in the case of a **paired t-test**)

The format of the **R** code to use for getting these values is as follows:

```
# printing the p-value
res$p.value
```

`[1] 0.0132656`

```
# printing the mean
res$estimate
```

```
mean in group Man mean in group Woman
68.98889 52.10000
```

```
# printing the confidence interval
res$conf.int
```

```
[1] 4.029759 29.748019
attr(,"conf.level")
[1] 0.95
```

You can perform unpaired **two-samples t-test**, **online**, without any installation by clicking the following link:

- Compare one-sample mean to a standard known mean
- Compare the means of two independent groups

This analysis has been performed using **R software** (ver. 3.2.4).

The **one-sample Wilcoxon signed rank test** is a non-parametric alternative to the **one-sample t-test** when the data cannot be assumed to be normally distributed. It’s used to determine whether the median of the sample is equal to a known standard value (i.e. a theoretical value).

Note that the data should be distributed symmetrically around the median; that is, the shape of the distribution below the median should roughly mirror the shape above it.
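The symmetry assumption can be checked informally before running the test, for example by centering the data at the hypothesized value and looking at a histogram, or by comparing the mean and median of the centered values (a rough heuristic, not a formal test). The sketch below uses the mice weights generated later in this section:

```r
# Informal symmetry check around the hypothesized median mu
set.seed(1234)
weight <- round(rnorm(10, 20, 2), 1)
mu <- 25
d <- weight - mu

# For a roughly symmetric sample, mean(d) and median(d) are close,
# and the histogram shows no long one-sided tail
c(mean = mean(d), median = median(d))
hist(d, main = "Weights centered at mu", xlab = "weight - mu")
```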

Typical research questions are:

- whether the median (\(m\)) of the sample *is equal* to the theoretical value (\(m_0\))?
- whether the median (\(m\)) of the sample *is less than* the theoretical value (\(m_0\))?
- whether the median (\(m\)) of the sample *is greater than* the theoretical value (\(m_0\))?

In statistics, we can define the corresponding *null hypothesis* (\(H_0\)) as follows:

- \(H_0: m = m_0\)
- \(H_0: m \leq m_0\)
- \(H_0: m \geq m_0\)

The corresponding *alternative hypotheses* (\(H_a\)) are as follows:

- \(H_a: m \ne m_0\) (different)
- \(H_a: m > m_0\) (greater)
- \(H_a: m < m_0\) (less)

Note that:

- Hypotheses 1) are called **two-tailed tests**
- Hypotheses 2) and 3) are called **one-tailed tests**

You can draw R base graphs as described at this link: R base graphs. Here, we’ll use the **ggpubr** R package for easy ggplot2-based data visualization.

- Install the latest version from GitHub as follows (recommended):

```
# Install
if(!require(devtools)) install.packages("devtools")
devtools::install_github("kassambara/ggpubr")
```

- Or, install from CRAN as follows:

`install.packages("ggpubr")`

To perform a one-sample Wilcoxon test, the R function **wilcox.test**() can be used as follows:

`wilcox.test(x, mu = 0, alternative = "two.sided")`

- **x**: a numeric vector containing your data values
- **mu**: the theoretical mean/median value. Default is 0 but you can change it.
- **alternative**: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

- **Prepare your data** as specified here: Best practices for preparing your data set for R
- **Save your data** in an external .txt tab or .csv file
- **Import your data into R** as follows:

```
# If .txt tab file, use this
my_data <- read.delim(file.choose())
# Or, if .csv file, use this
my_data <- read.csv(file.choose())
```

Here, we’ll use an example data set containing the weight of 10 mice.

We want to know whether the median weight of the mice differs from 25g.

```
set.seed(1234)
my_data <- data.frame(
name = paste0(rep("M_", 10), 1:10),
weight = round(rnorm(10, 20, 2), 1)
)
```

```
# Print the first 10 rows of the data
head(my_data, 10)
```

```
name weight
1 M_1 17.6
2 M_2 20.6
3 M_3 22.2
4 M_4 15.3
5 M_5 20.9
6 M_6 21.0
7 M_7 18.9
8 M_8 18.9
9 M_9 18.9
10 M_10 18.2
```

```
# Statistical summaries of weight
summary(my_data$weight)
```

```
Min. 1st Qu. Median Mean 3rd Qu. Max.
15.30 18.38 18.90 19.25 20.82 22.20
```

- **Min.**: the minimum value
- **1st Qu.**: the first quartile. 25% of values are lower than this.
- **Median**: the median value. Half the values are lower; half are higher.
- **3rd Qu.**: the third quartile. 75% of values are lower than this.
- **Max.**: the maximum value
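The individual quantiles reported by **summary**() can also be obtained with the **quantile**() function (the default quantile algorithm is the same one summary() uses; summary() merely rounds its display):

```r
# Same mice weights as above
set.seed(1234)
weight <- round(rnorm(10, 20, 2), 1)

# 0%, 25%, 50%, 75% and 100% quantiles of the weights
quantile(weight, probs = c(0, 0.25, 0.5, 0.75, 1))
```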

```
library(ggpubr)
ggboxplot(my_data$weight,
ylab = "Weight (g)", xlab = FALSE,
ggtheme = theme_minimal())
```

We want to know whether the median weight of the mice differs from 25g (two-tailed test).

```
# One-sample wilcoxon test
res <- wilcox.test(my_data$weight, mu = 25)
# Printing the results
res
```

```
Wilcoxon signed rank test with continuity correction
data: my_data$weight
V = 0, p-value = 0.005793
alternative hypothesis: true location is not equal to 25
```

```
# print only the p-value
res$p.value
```

`[1] 0.005793045`

The **p-value** of the test is 0.005793, which is less than the significance level alpha = 0.05. We can reject the null hypothesis and conclude that the median weight of the mice is significantly different from 25g (**p-value** = 0.005793).

Note that:

- if you want to test whether the median weight of mice is less than 25g (one-tailed test), type this:

```
wilcox.test(my_data$weight, mu = 25,
alternative = "less")
```

- Or, if you want to test whether the median weight of mice is greater than 25g (one-tailed test), type this:

```
wilcox.test(my_data$weight, mu = 25,
alternative = "greater")
```

This analysis has been performed using **R software** (ver. 3.2.4).

- What is one-sample t-test?
- Research questions and statistical hypotheses
- Formula of one-sample t-test
- Visualize your data and compute one-sample t-test in R
- Install ggpubr R package for data visualization
- R function to compute one-sample t-test
- Import your data into R
- Check your data
- Visualize your data using box plots
- Preliminary test to check one-sample t-test assumptions
- Compute one-sample t-test
- Interpretation of the result
- Access to the values returned by t.test() function

- Online one-sample t-test calculator
- See also
- Infos

Generally, the theoretical mean comes from:

- a previous experiment. For example, compare whether the mean weight of mice differs from 200 mg, a value determined in a previous study.
- or from an experiment where you have control and treatment conditions. If you express your data as “percent of control”, you can test whether the average value of treatment condition differs significantly from 100.

Note that the one-sample t-test can be used only when the data are normally distributed. This can be checked using the **Shapiro-Wilk test**.

Typical research questions are:

- whether the mean (\(m\)) of the sample *is equal* to the theoretical mean (\(\mu\))?
- whether the mean (\(m\)) of the sample *is less than* the theoretical mean (\(\mu\))?
- whether the mean (\(m\)) of the sample *is greater than* the theoretical mean (\(\mu\))?

In statistics, we can define the corresponding *null hypothesis* (\(H_0\)) as follows:

- \(H_0: m = \mu\)
- \(H_0: m \leq \mu\)
- \(H_0: m \geq \mu\)

The corresponding *alternative hypotheses* (\(H_a\)) are as follows:

- \(H_a: m \ne \mu\) (different)
- \(H_a: m > \mu\) (greater)
- \(H_a: m < \mu\) (less)

Note that:

- Hypotheses 1) are called **two-tailed tests**
- Hypotheses 2) and 3) are called **one-tailed tests**

The t-statistic can be calculated as follow:

\[ t = \frac{m-\mu}{s/\sqrt{n}} \]

where,

- **m** is the sample **mean**
- **n** is the sample **size**
- **s** is the sample **standard deviation** with \(n-1\) **degrees of freedom**
- \(\mu\) is the **theoretical value**

We can compute the p-value corresponding to the absolute value of the **t-test statistic** (|t|) for the **degrees of freedom** (df): \(df = n - 1\).
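The formula above can be applied directly and checked against **t.test**(); the sketch below regenerates the mice weights used later in this section (`set.seed(1234)`):

```r
# One-sample t-statistic and two-sided p-value, computed by hand
set.seed(1234)
x  <- round(rnorm(10, 20, 2), 1)   # the mice weights used in this article
mu <- 25

n      <- length(x)
t_stat <- (mean(x) - mu) / (sd(x) / sqrt(n))
p_val  <- 2 * pt(-abs(t_stat), df = n - 1)

# t.test() gives the same statistic and p-value
res <- t.test(x, mu = mu)
all.equal(unname(res$statistic), t_stat)  # TRUE
all.equal(res$p.value, p_val)             # TRUE
```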

How to interpret the results?

If the p-value is less than or equal to the significance level 0.05, we can reject the null hypothesis and accept the alternative hypothesis. In other words, we conclude that the sample mean is significantly different from the theoretical mean.

You can draw R base graphs as described at this link: R base graphs. Here, we’ll use the **ggpubr** R package for easy ggplot2-based data visualization.

- Install the latest version from GitHub as follows (recommended):

```
# Install
if(!require(devtools)) install.packages("devtools")
devtools::install_github("kassambara/ggpubr")
```

- Or, install from CRAN as follows:

`install.packages("ggpubr")`

To perform a one-sample t-test, the R function **t.test**() can be used as follows:

`t.test(x, mu = 0, alternative = "two.sided")`

- **x**: a numeric vector containing your data values
- **mu**: the theoretical mean. Default is 0 but you can change it.
- **alternative**: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

- **Prepare your data** as specified here: Best practices for preparing your data set for R
- **Save your data** in an external .txt tab or .csv file
- **Import your data into R** as follows:

```
# If .txt tab file, use this
my_data <- read.delim(file.choose())
# Or, if .csv file, use this
my_data <- read.csv(file.choose())
```

Here, we’ll use an example data set containing the weight of 10 mice.

We want to know whether the average weight of the mice differs from 25g.

```
set.seed(1234)
my_data <- data.frame(
name = paste0(rep("M_", 10), 1:10),
weight = round(rnorm(10, 20, 2), 1)
)
```

```
# Print the first 10 rows of the data
head(my_data, 10)
```

```
name weight
1 M_1 17.6
2 M_2 20.6
3 M_3 22.2
4 M_4 15.3
5 M_5 20.9
6 M_6 21.0
7 M_7 18.9
8 M_8 18.9
9 M_9 18.9
10 M_10 18.2
```

```
# Statistical summaries of weight
summary(my_data$weight)
```

```
Min. 1st Qu. Median Mean 3rd Qu. Max.
15.30 18.38 18.90 19.25 20.82 22.20
```

- **Min.**: the minimum value
- **1st Qu.**: the first quartile. 25% of values are lower than this.
- **Median**: the median value. Half the values are lower; half are higher.
- **3rd Qu.**: the third quartile. 75% of values are lower than this.
- **Max.**: the maximum value

```
library(ggpubr)
ggboxplot(my_data$weight,
ylab = "Weight (g)", xlab = FALSE,
ggtheme = theme_minimal())
```

**Is this a large sample?** No, because n < 30.

Since the sample size is not large enough (less than 30; central limit theorem), we need to **check whether the data follow a normal distribution**.

How to check the normality?

Read this article: Normality Test in R.

Briefly, it’s possible to use the **Shapiro-Wilk normality test** and to look at the **normality plot**.

**Shapiro-Wilk test**:

- Null hypothesis: the data are normally distributed
- Alternative hypothesis: the data are not normally distributed

`shapiro.test(my_data$weight) # => p-value = 0.6993`

From the output, the p-value is greater than the significance level 0.05, implying that the distribution of the data is not significantly different from the normal distribution. In other words, we can assume normality.

**Visual inspection** of the data normality using **Q-Q plots** (quantile-quantile plots). A Q-Q plot draws the correlation between a given sample and the normal distribution.

```
library("ggpubr")
ggqqplot(my_data$weight, ylab = "Men's weight",
ggtheme = theme_minimal())
```

From the normality plot, we conclude that the data may come from a normal distribution.

Note that, if the data are not normally distributed, it’s recommended to use the non-parametric one-sample Wilcoxon rank test.

We want to know whether the average weight of the mice differs from 25g (two-tailed test).

```
# One-sample t-test
res <- t.test(my_data$weight, mu = 25)
# Printing the results
res
```

```
One Sample t-test
data: my_data$weight
t = -9.0783, df = 9, p-value = 7.953e-06
alternative hypothesis: true mean is not equal to 25
95 percent confidence interval:
17.8172 20.6828
sample estimates:
mean of x
19.25
```

In the result above:

- **t** is the **t-test statistic** value (t = -9.078),
- **df** is the degrees of freedom (df = 9),
- **p-value** is the significance level of the **t-test** (p-value = \(7.953 \times 10^{-6}\)),
- **conf.int** is the **confidence interval** of the mean at 95% (conf.int = [17.8172, 20.6828]),
- **sample estimates** is the mean value of the sample (mean = 19.25).

Note that:

- if you want to test whether the mean weight of mice is less than 25g (one-tailed test), type this:

```
t.test(my_data$weight, mu = 25,
alternative = "less")
```

- Or, if you want to test whether the mean weight of mice is greater than 25g (one-tailed test), type this:

```
t.test(my_data$weight, mu = 25,
alternative = "greater")
```

The **p-value** of the test is \(7.953 \times 10^{-6}\), which is less than the significance level alpha = 0.05. We can conclude that the mean weight of the mice is significantly different from 25g (**p-value** = \(7.953 \times 10^{-6}\)).

The result of **t.test()** function is a list containing the following components:

- **statistic**: the value of the **t-test statistic**
- **parameter**: the **degrees of freedom** for the **t-test statistic**
- **p.value**: the **p-value** for the test
- **conf.int**: a **confidence interval** for the mean appropriate to the specified **alternative hypothesis**
- **estimate**: the means of the two groups being compared (in the case of an **independent t-test**) or the difference in means (in the case of a **paired t-test**)

The format of the **R** code to use for getting these values is as follows:

```
# printing the p-value
res$p.value
```

`[1] 7.953383e-06`

```
# printing the mean
res$estimate
```

```
mean of x
19.25
```

```
# printing the confidence interval
res$conf.int
```

```
[1] 17.8172 20.6828
attr(,"conf.level")
[1] 0.95
```

You can perform **one-sample t-test**, **online**, without any installation by clicking the following link:

This analysis has been performed using **R software** (ver. 3.2.4).