# Remaining Analyses Part 16

## Tying up (more) loose ends

Brent and Emma suggested a few things to me many moons ago:

- See if there’s a site-level bare vs. eelgrass effect
- Correct my p-values for multiple comparisons

A few weeks later, I finally did it.

### Bare vs. eelgrass effect

I split my data up by site, then conducted an ANOVA at each site testing for differences in protein abundance between bare and eelgrass habitats. I wrote out each ANOVA's F-statistic, original p-value, and Benjamini-Hochberg-corrected p-value in the following data tables:
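Roughly what that per-site comparison looks like, sketched in Python (the real analysis was done in R, and the site names and abundance values below are made up for illustration):

```python
# Hypothetical sketch of the per-site bare vs. eelgrass test: a one-way
# ANOVA on protein abundance within each site. Data are invented.
from scipy.stats import f_oneway

abundances = {
    "SiteA": {"bare": [4.2, 3.9, 4.5], "eelgrass": [4.0, 4.3, 3.8]},
    "SiteB": {"bare": [2.1, 2.4, 2.0], "eelgrass": [2.2, 2.5, 2.3]},
}

results = {}
for site, habitats in abundances.items():
    # With two groups, a one-way ANOVA is equivalent to a t-test
    f_stat, p_val = f_oneway(habitats["bare"], habitats["eelgrass"])
    results[site] = (f_stat, p_val)
```

Each site gets its own F-statistic and p-value, which can then be written out alongside the corrected values.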

I also created a bunch of boxplots, which can be found in this folder. I did not find any site-level effects! I’m honestly glad that there are no site-level effects. That would make my story waaaaay more complicated.

### Correcting for multiple comparisons

According to this handy dandy stats website, running a bunch of statistical tests will produce some significant results just by chance. That’s not good. With the Benjamini-Hochberg method, you set a false discovery rate (FDR), then only count p-values that meet that FDR criterion as significant. The FDR should be set ahead of time, before looking at the data. Since this is an exploratory study, I don’t want to set a stringent FDR and miss something. But I also don’t want to set my FDR so high that everything is significant. I settled on 10%.
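A quick illustration of how the B-H step-up rule applies that 10% FDR (the p-values here are invented, not mine): sort the p-values, then find the largest rank k where the k-th p-value is at most (k / m) × FDR. Everything up to that rank counts as significant.

```python
# Benjamini-Hochberg step-up rule with a 10% FDR (made-up p-values).
FDR = 0.10
pvals = sorted([0.003, 0.012, 0.019, 0.047, 0.180, 0.350, 0.620, 0.810])
m = len(pvals)

cutoff_rank = 0
for rank, p in enumerate(pvals, start=1):
    # Keep the LARGEST rank that satisfies the criterion, even if
    # some intermediate ranks fail it.
    if p <= (rank / m) * FDR:
        cutoff_rank = rank

significant = pvals[:cutoff_rank]
```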

In R, I used the function `p.adjust` to take the p-values I got from the ANOVAs and adjust them with the B-H method. Any adjusted p-values less than my FDR are considered significant. For those peptides, I can then look at the Tukey HSD p-values, which are already adjusted for multiple comparisons.
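For the curious, here is a Python sketch of the adjustment that R's `p.adjust(..., method = "BH")` computes (the p-values below are made up, not my real ANOVA results): each sorted p-value is scaled by m/rank, then monotonicity is enforced from the largest rank down.

```python
# B-H adjusted p-values, mirroring R's p.adjust(method = "BH").
# Input p-values are invented for illustration.
def bh_adjust(pvalues):
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from the largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

pvals = [0.001, 0.008, 0.020, 0.041, 0.045, 0.120, 0.180, 0.300, 0.500, 0.900]
adjusted = bh_adjust(pvals)
significant = [p for p in adjusted if p < 0.10]  # my 10% FDR
```

Comparing each adjusted p-value to the FDR is equivalent to the step-up rule, which is why the `significant` filter is a simple threshold.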

Here’s my revised table. I have 13 peptides that are significant!

My next step is to incorporate all of the peptide abundance information into a succinct figure.