Controlling for multiple comparisons in imaging analysis. Wednesday, Lecture 2 Jeanette Mumford University of Wisconsin - Madison


1 Controlling for multiple comparisons in imaging analysis Wednesday, Lecture 2 Jeanette Mumford University of Wisconsin - Madison

2 Motivation: run 100 hypothesis tests on null data using p < 0.05. How many significant results will I find (on average)?
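The expected count follows directly from the threshold: 100 tests × 0.05 = 5 false positives on average. Below is a minimal sketch (not from the lecture) that makes this concrete; it assumes Python with NumPy/SciPy and made-up null data analyzed with one-sample t-tests.

```python
# Hedged illustration: 100 null "voxels", each tested at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_subjects = 100, 20

# Null data: each test is a sample of 20 values with true mean 0.
data = rng.standard_normal((n_tests, n_subjects))
t, p = stats.ttest_1samp(data, popmean=0, axis=1)

print("Significant at p < 0.05:", np.sum(p < 0.05))  # about 5 expected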

3

4 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output.

5 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output.

6 Review of hypothesis testing What is H0? What is HA? What are the steps of carrying out a hypothesis test?

7 Review of hypothesis testing What is H0? What is HA? What are the steps of carrying out a hypothesis test?

8 Steps of hypothesis testing

9 Steps of hypothesis testing

10 Steps of hypothesis testing

11 Steps of hypothesis testing What do we compare this area to (p-value)?

12 What does the p-value mean? p = 0.01

13 What does the p-value mean? p = 0.01 If the null distribution is true

14 What does the p-value mean? p = 0.01: if the null hypothesis is true, the probability of observing my statistic (or something more extreme than it) is 0.01.

15 What does the p-value threshold imply? We choose 0.05. Less than 0.05: we reject the null hypothesis. Greater than 0.05: we fail to reject the null hypothesis.

16 What does the p-value threshold imply? We choose 0.05. Less than 0.05: we reject the null hypothesis. Greater than 0.05: we fail to reject the null hypothesis.

17 What does the p-value threshold imply? We choose 0.05. Less than 0.05: we reject the null hypothesis. Greater than 0.05: we fail to reject the null hypothesis.

18 Interpretation: 1100 total voxels; 100 voxels have β = δ, and 80% power -> 80 voxels detected; 1000 voxels have β = 0, and a 5% type I error rate -> 50 false positives. (Empty 2x2 table: rows Non-active / Active / Total, columns Declared active / Fail to declare active / Total)

19 Interpretation: what we know (the test results) corresponds to the columns of the table, declared active vs. fail to declare active.

20 Interpretation: what we don't know (the truth) corresponds to the rows of the table, non-active vs. active.

21 Interpretation: 1100 total voxels; 100 voxels have signal (the null is false), 80% power -> 80 voxels detected; 1000 voxels have no signal (null), 5% type I error -> 50 false positives. Row totals: Non-active 1000, Active 100, Total 1100.

22 Interpretation: row totals so far: Non-active 1000, Total 1100.

23 Interpretation: Active row filled in: 80 declared active (power), 20 fail to declare (Type II err.), 100 total; overall total 1100.

24 Interpretation: Active: 80 declared active (power), 20 fail to declare (Type II err.), 100 total; overall total 1100.

25 Interpretation: full table. Non-active: 50 declared active (Type I err.), 950 fail to declare (correct), 1000 total. Active: 80 declared active (power), 20 fail to declare (Type II err.), 100 total. Overall total: 1100.

26 Interpretation: 1100 total voxels; 100 voxels have signal (the null is false), 80% power -> 80 voxels detected; 1000 voxels have no signal (null), 5% type I error -> 50 false positives.

27 Interpretation: the focus is on controlling this number, the 50 null voxels falsely declared active.

28 Implication of type I error: if you run enough tests, you'll find something that is significant, but this doesn't mean it is truly significant. If you run 20 tests with a 5% threshold on type I errors, you expect (on average) 1 significant test even when every null is true; this would be a false positive.
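A quick, hedged calculation of the slide's claim, assuming the 20 tests are independent (an assumption not stated on the slide):

```python
# With 20 independent null tests at alpha = 0.05, the expected number of
# false positives is 20 * 0.05 = 1, and the chance of at least one false
# positive is 1 - (1 - 0.05)**20, roughly 0.64.
alpha, n = 0.05, 20
print("Expected false positives:", n * alpha)                  # 1.0
print("P(at least one false positive):", 1 - (1 - alpha) ** n)  # ~0.642
```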

29 Hypothesis testing in fMRI: mass univariate modeling. Fit a separate model for each voxel, look at images of statistics, apply a threshold.

30 Assessing statistic images: what threshold will show us signal? High threshold (t > 5.5): good specificity, poor power (risk of false negatives). Medium threshold (t > 3.5). Low threshold (t > 0.5): poor specificity (risk of false positives), good power.

31 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output.

32 Levels of inference Voxel level Cluster level Peak level Set level

33 Voxel-level inference: retain voxels above the α-level threshold u_α. Gives the best spatial specificity; the null hypothesis at a single voxel can be rejected. (Figure: statistic values over space)

34 Voxel-level inference: retain voxels above the α-level threshold u_α. Gives the best spatial specificity; the null hypothesis at a single voxel can be rejected. (Figure: statistic values over space with the u_α line)

35 Voxel-level inference: retain voxels above the α-level threshold u_α. Gives the best spatial specificity; the null hypothesis at a single voxel can be rejected. (Figure: significant voxels above u_α, no significant voxels below)

36 Cluster-level inference: a two-step process. Step 1: define clusters by an arbitrary cluster-forming threshold u_clus. (Figure: statistic values over space with the u_clus line)

37 Cluster-level inference: a two-step process. Step 1: define clusters by the arbitrary threshold u_clus; step 2: retain clusters larger than the α-level size threshold k_α. (Figure: one cluster not significant, one cluster significant relative to k_α)

38 Cluster-level inference: typically better sensitivity but worse spatial specificity. The null hypothesis of the entire cluster is rejected, which only means that one or more of the voxels in the cluster are active. (Figure: clusters above u_clus, one not significant, one significant)

39 Peak-level inference: again start with a cluster-forming threshold, but instead of cluster size, focus on peak height. Similarly to cluster-level inference, significance applies to a set of voxels: the peak and its neighbors. (Figure: statistic values over space with the u_clus line)

40 Peak-level inference: again start with a cluster-forming threshold, but instead of cluster size, focus on peak height. Similarly to cluster-level inference, significance applies to a set of voxels: the peak and its neighbors. (Figure: peaks Z_1 ... Z_5 above the u_clus line)

41 Peak-level inference: again start with a cluster-forming threshold, but instead of cluster size, focus on peak height. Similarly to cluster-level inference, significance applies to a set of voxels: the peak and its neighbors. (Figure: peaks Z_1 ... Z_5, with the peak threshold u_peak above u_clus)

42 Peak-level inference: again start with a cluster-forming threshold, but instead of cluster size, focus on peak height. Similarly to cluster-level inference, significance applies to a set of voxels: the peak and its neighbors. (Figure: peaks Z_1 ... Z_5, with the peak threshold u_peak above u_clus)

43 Set-level inference: is there any activation anywhere in the brain? An omnibus hypothesis test of all voxels simultaneously; if significant, we only know there's activation somewhere in the brain.

44 Levels of inference Voxel level Cluster level Peak level Set level

45 Questions for you Why do some approaches require 2 thresholds? What thresholding strategy do people typically use?

46 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output.

47 What error rate should we control? Per comparison error rate? Family wise error rate? False discovery rate?

48 Different types of error rates. PCER, per comparison error rate: controlling each voxel at 5%, we expect 5% of null voxels to be (mistakenly) deemed active. FWER, family-wise error rate: controls the probability of any false positives; if you ran 20 NULL group analyses (on 20 data sets) with FWER controlled at 5%, on average only 1 analysis would have any significant finding.

49 Different types of error rates. PCER, per comparison error rate: controlling each voxel at 5%, we expect 5% of null voxels to be (mistakenly) deemed active. FWER, family-wise error rate: controls the probability of any false positives; if you ran 20 NULL group analyses (on 20 data sets) with FWER controlled at 5%, on average only 1 analysis would have any significant finding.

50 Different types of error rates. FDR, false discovery rate: of the voxels you declared significant, what proportion were truly null?

51 FWER vs FDR. FWER = P(# of true null voxels declared active ≥ 1). FDR = E(# of true null voxels declared active / # of voxels declared active). (2x2 table: Declared active / Fail to declare active / Total vs. Non-active / Active / Total)

52 FWER vs FDR. FWER = P(# of true null voxels declared active ≥ 1). FDR = E(# of true null voxels declared active / # of voxels declared active). (2x2 table: Declared active / Fail to declare active / Total vs. Non-active / Active / Total)
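A short, hedged worked example tying these two rates to the 2x2 table above; the 130 declared-active count is simply 80 + 50 from that table.

```python
# 1100 voxels, 80% power, 5% per-voxel type I error (from the earlier table):
# of the voxels declared active, what fraction are actually null?
false_positives = 50            # null voxels declared active
true_positives = 80             # truly active voxels detected
declared_active = false_positives + true_positives
print("Observed false discovery proportion:",
      false_positives / declared_active)   # ~0.385
```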

53 False discovery rate illustration (figure panels: Noise, Signal, Signal + Noise).

54 (Figure) Control of the per comparison rate at 10%: 11.3%, 11.3%, 12.5%, 10.8%, 11.5%, 10.0%, 10.7%, 11.2%, 10.2%, 9.5% of null pixels are false positives in each realization. Control of the familywise error rate at 10%: panels show the occurrence of a familywise error (FWE). Control of the false discovery rate at 10%: 6.7%, 10.4%, 14.9%, 9.3%, 16.2%, 13.8%, 14.0%, 10.5%, 12.2%, 8.7% of activated pixels are false positives in each realization.

55 (Figure) Control of the per comparison rate at 10%: 11.3%, 11.3%, 12.5%, 10.8%, 11.5%, 10.0%, 10.7%, 11.2%, 10.2%, 9.5% of null pixels are false positives in each realization. Control of the familywise error rate at 10%: panels show the occurrence of a familywise error (FWE). Control of the false discovery rate at 10%: 6.7%, 10.4%, 14.9%, 9.3%, 16.2%, 13.8%, 14.0%, 10.5%, 12.2%, 8.7% of activated pixels are false positives in each realization.

56 (Figure) Control of the per comparison rate at 10%: 11.3%, 11.3%, 12.5%, 10.8%, 11.5%, 10.0%, 10.7%, 11.2%, 10.2%, 9.5% of null pixels are false positives in each realization. Control of the familywise error rate at 10%: panels show the occurrence of a familywise error (FWE). Control of the false discovery rate at 10%: 6.7%, 10.4%, 14.9%, 9.3%, 16.2%, 13.8%, 14.0%, 10.5%, 12.2%, 8.7% of activated pixels are false positives in each realization.

57 Considerations with multiple comparisons: which statistic you're working with (voxelwise? clusterwise?) and which error rate you're controlling (per comparison error rate, family-wise error rate, or false discovery rate).

58 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output.

59 FWER = P(# of true null voxels declared active ≥ 1). FDR = E(# of true null voxels declared active / # of voxels declared active). (2x2 table: Declared active / Fail to declare active / Total vs. Non-active / Active / Total)

60 FWER correction - Bonferroni. Based on the Bonferroni inequality: P(E_1 or E_2 or ... or E_n) ≤ Σ_{i=1}^{n} P(E_i). If P(Y_i passes H_0) ≤ α/n for each test, then P(some Y_i passes H_0) ≤ Σ_{i=1}^{n} P(Y_i passes H_0) ≤ α. For 100,000 voxels, α = 0.05/100,000 = 5 × 10^-7 per voxel.

61 FWER correction - Bonferroni. Based on the Bonferroni inequality: P(E_1 or E_2 or ... or E_n) ≤ Σ_{i=1}^{n} P(E_i). If P(Y_i passes H_0) ≤ α/n for each test, then P(some Y_i passes H_0) ≤ Σ_{i=1}^{n} P(Y_i passes H_0) ≤ α. For 100,000 voxels, α = 0.05/100,000 = 5 × 10^-7 per voxel.

62 FWER correction - Bonferroni. Based on the Bonferroni inequality: P(E_1 or E_2 or ... or E_n) ≤ Σ_{i=1}^{n} P(E_i). If P(Y_i passes H_0) ≤ α/n for each test, then P(some Y_i passes H_0) ≤ Σ_{i=1}^{n} P(Y_i passes H_0) ≤ α. For 100,000 voxels, α = 0.05/100,000 = 5 × 10^-7 per voxel.

63 FWER correction - Bonferroni. Based on the Bonferroni inequality: P(E_1 or E_2 or ... or E_n) ≤ Σ_{i=1}^{n} P(E_i). If P(Y_i passes H_0) ≤ α/n for each test, then P(some Y_i passes H_0) ≤ Σ_{i=1}^{n} P(Y_i passes H_0) ≤ α. For 100,000 voxels, α = 0.05/100,000 = 5 × 10^-7 per voxel.

64 FWER correction - Bonferroni: can be too conservative. Bonferroni assumes all tests are independent, but fMRI data tend to be spatially correlated, so the number of independent tests is less than the number of voxels.
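A minimal sketch of the Bonferroni arithmetic above, assuming Python/SciPy; the z cutoff is shown only for illustration and is not SPM's output.

```python
# Bonferroni: test each of n voxels at alpha / n.
from scipy import stats

alpha, n_voxels = 0.05, 100_000
alpha_per_voxel = alpha / n_voxels              # 5e-07
z_threshold = stats.norm.isf(alpha_per_voxel)   # one-sided z cutoff, ~4.89
print(alpha_per_voxel, z_threshold)
```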

65 Smooth data: how will the Bonferroni correction work with smoothed data? Will the false positive rate increase or decrease?

66 FWER, Random Field Theory: a parametric approach to controlling false positives (parametric = there's an equation that will spit out the p-value). Beyond the scope of this course; tends to be as conservative as Bonferroni.

67 FWER, Random Field Theory: a parametric approach to controlling false positives (parametric = there's an equation that will spit out the p-value). Beyond the scope of this course; tends to be as conservative as Bonferroni.

68 FWER, Random Field Theory: a parametric approach to controlling false positives (parametric = there's an equation that will spit out the p-value). The voxelwise version tends to be as conservative as Bonferroni.

69 Another way to control all tests (figure: statistic magnitude plotted against test number).

70 Another way to control all tests (figure: statistic magnitude plotted against test number).

71 Another way to control all tests (figure: statistic magnitude plotted against test number).

72 Another way to control all tests (figure: statistic magnitude plotted against test number).

73 FWER with the max statistic. FWER = P(FWE) = P(one or more voxels ≥ u | H_0) = P(max voxel ≥ u | H_0). The 100(1-α)th percentile of the max distribution controls FWER: FWER = P(max voxel ≥ u_α | H_0) = α. (Figure: null distribution of the maximum with threshold u_α and upper-tail area α)

74 FWER with the max statistic. FWER = P(FWE) = P(one or more voxels ≥ u | H_0) = P(max voxel ≥ u | H_0). The 100(1-α)th percentile of the max distribution controls FWER: FWER = P(max voxel ≥ u_α | H_0) = α. (Figure: null distribution of the maximum with threshold u_α and upper-tail area α)

75 FWER with the max statistic. FWER = P(FWE) = P(one or more voxels ≥ u | H_0) = P(max voxel ≥ u | H_0). The 100(1-α)th percentile of the max distribution controls FWER: FWER = P(max voxel ≥ u_α | H_0) = α. (Figure: null distribution of the maximum with threshold u_α and upper-tail area α)

76 FWER with the max statistic. FWER = P(FWE) = P(one or more voxels ≥ u | H_0) = P(max voxel ≥ u | H_0). The 100(1-α)th percentile of the max distribution controls FWER: FWER = P(max voxel ≥ u_α | H_0) = α. (Figure: null distribution of the maximum with threshold u_α and upper-tail area α)

77 FWER with the max statistic. FWER = P(FWE) = P(one or more voxels ≥ u | H_0) = P(max voxel ≥ u | H_0). The 100(1-α)th percentile of the max distribution controls FWER: FWER = P(max voxel ≥ u_α | H_0) = α. (Figure: null distribution of the maximum with threshold u_α and upper-tail area α)

78 FWER with the max statistic. FWER = P(FWE) = P(one or more voxels ≥ u | H_0) = P(max voxel ≥ u | H_0). The 100(1-α)th percentile of the max distribution controls FWER: FWER = P(max voxel ≥ u_α | H_0) = α. (Figure: null distribution of the maximum with upper-tail area α)

79 FWER with the max statistic. FWER = P(FWE) = P(one or more voxels ≥ u | H_0) = P(max voxel ≥ u | H_0). The 100(1-α)th percentile of the max distribution controls FWER: FWER = P(max voxel ≥ u_α | H_0) = α. (Figure: null distribution of the maximum with threshold u_α)

80 FWER with the max statistic. FWER = P(FWE) = P(one or more voxels ≥ u | H_0) = P(max voxel ≥ u | H_0). The 100(1-α)th percentile of the max distribution controls FWER: FWER = P(max voxel ≥ u_α | H_0) = α. (Figure: null distribution of the maximum with threshold u_α and upper-tail area α)

81 FWER MTP solutions: Random Field Theory. The Euler characteristic χ_u is a topological measure, #blobs - #holes; at high thresholds there are no holes and never more than one blob per region, so it just counts blobs. For a random field, FWER = P(max voxel ≥ u | H_0) = P(one or more blobs | H_0) ≈ P(χ_u ≥ 1 | H_0) ≈ E(χ_u | H_0). (Figure: threshold applied to a random field and the resulting suprathreshold sets)

82 Distribution details: the math is hairy! See Nichols and Hayasaka (2003) and Cao and Worsley (2001). What you need to know: the threshold depends on the smoothness of your image; you must quantify smoothness, and it is important to report it when using RFT.

83 General idea: E(χ_u) ≈ (mathy stuff) × Volume / Smoothness. We know what the volume is; what is smoothness?

84 Smoothness: how smooth are the data? Measured by FWHM = [FWHM_x, FWHM_y, FWHM_z]. Starting with white noise and smoothing with a Gaussian, how large does the variance of that Gaussian need to be so that the smoothness matches your data?

85 RESEL, RESolution ELement: RESEL = FWHM_x × FWHM_y × FWHM_z. RESEL count: if your voxels were the size of a RESEL, how many would be required to fill your volume? E.g., 10 voxels with 2.5-voxel FWHM smoothness ⇒ 4 RESELs.

86 (Figure: 10 voxels, FWHM = 2.5 voxels, RESEL count = 4)
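A small sketch of the RESEL arithmetic from the 1-D example above; the 3-D numbers are purely illustrative assumptions, not from the lecture.

```python
# 1-D example from the slides: 10 voxels, FWHM = 2.5 voxels -> 4 RESELs.
n_voxels_1d, fwhm = 10, 2.5
print("RESEL count (1-D):", n_voxels_1d / fwhm)        # 4.0

# Hypothetical 3-D case: a 200,000-voxel mask with 3-voxel FWHM in each
# direction; RESEL count = volume / (FWHM_x * FWHM_y * FWHM_z), in voxel units.
print("RESEL count (3-D):", 200_000 / (3 * 3 * 3))     # ~7407
```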

87 Note about the RESEL count: it is not the number of independent tests and not a magic bullet for a better Bonferroni; it is a re-expression of volume in terms of smoothness. We need it because it is necessary to calculate our p-values.

88 Revisit the distribution: E(χ_u) ≈ (mathy stuff) × Volume / Smoothness, where smoothness is defined in RESELs and E(χ_u) is our p-value. How does the p-value change as the volume increases? How does it change as smoothness increases?

89 RFT adapts: for larger volumes it is more strict (the multiple comparison problem is worse); for smoother data it is less strict (the multiple comparison problem is less severe).

90 Shortcomings of RFT: it requires estimating a lot of parameters, and the random field must be sufficiently smooth. If you don't spatially smooth the data enough, RFT doesn't work well. I'll cover the Eklund paper later on today!

91 Bonferroni and RFT (figure: t_11 statistic map with the RF threshold u_RF = 9.87 and the Bonferroni threshold u_Bonf; the number of significant voxels is shown on the slide).

92 RFT: voxelwise RFT is rarely used in practice because it is too conservative; clusterwise RFT is very common. We'll learn about cluster stats with permutation testing.

93 FYI: if you're using RFT, you probably shouldn't lower the cluster-forming threshold, because the assumptions could break down. If you really want to lower it, switch to nonparametric approaches (SnPM, randomise).

94 Questions for you Why do we use the max statistic for multiple comparison correction? Was this a voxelwise or clusterwise approach?

95 Parametric vs nonparametric. Parametric: assume the distribution shape; typically 1 or more parameters must be estimated. Nonparametric: no assumption on the distribution shape; use the data to construct the distribution. Related to the bootstrap and jackknife, BUT not the same!!!

96 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output.

97 Permutation test: can generally be used when the true distribution shape is unknown (the data don't follow a normal distribution). On its own it doesn't control for multiple comparisons. Using it in conjunction with the max statistic tackles 2 problems: not knowing the shape of the distribution, and controlling FWER.

98 Permutation test: can generally be used when the true distribution shape is unknown (the data don't follow a normal distribution). On its own it doesn't control for multiple comparisons. Using it in conjunction with the max statistic tackles 2 problems: not knowing the shape of the distribution, and controlling FWER.

99 Permutation test: can generally be used when the true distribution shape is unknown (the data don't follow a normal distribution). On its own it doesn't control for multiple comparisons. Using it in conjunction with the max statistic tackles 2 problems: not knowing the shape of the distribution, and controlling FWER.

100 Permutation test: first without the max statistic, so we understand how it generally works; then with the max statistic, so we understand how to control FWER.

101 Permutation test. Parametric methods: assume the distribution of the statistic under the null hypothesis. Nonparametric methods: use the data to find the distribution of the statistic under the null hypothesis; works for any statistic! (Figure: 5% tail of a parametric null distribution vs. 5% tail of a nonparametric null distribution)

102 Permutation test toy example: data from a voxel in a visual stimulation experiment. A: active, flashing checkerboard; B: baseline, fixation. 6 blocks, ABABAB; just consider the block averages. Null hypothesis H_0: no experimental effect, the A & B labels are arbitrary. Statistic: mean difference.

103 Permutation test toy example: under H_0, consider all 20 equivalent relabelings: AAABBB, ABABAB, BAAABB, BABBAA, AABABB, ABABBA, BAABAB, BBAAAB, AABBAB, ABBAAB, BAABBA, BBAABA, AABBBA, ABBABA, BABAAB, BBABAA, ABAABB, ABBBAA, BABABA, BBBAAA.

104 Permutation test toy example: under H_0, consider all equivalent relabelings and compute all possible statistic values. AAABBB 4.82, ABABAB 9.45, BAAABB, BABBAA, AABABB, ABABBA 6.97, BAABAB 1.10, BBAAAB 3.15, AABBAB, ABBAAB 1.38, BAABBA, BBAABA 0.67, AABBBA, ABBABA, BABAAB, BBABAA 3.25, ABAABB 6.86, ABBBAA 1.48, BABABA, BBBAAA -4.82.

105 Permutation test toy example: under H_0, consider all equivalent relabelings, compute all possible statistic values (listed above), and find the 95th percentile of the permutation distribution.

106 Permutation test toy example: under H_0, consider all equivalent relabelings, compute all possible statistic values (listed above), and find the 95th percentile of the permutation distribution.

107 Permutation test toy example: under H_0, consider all equivalent relabelings, compute all possible statistic values, and find the 95th percentile of the permutation distribution.
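A minimal sketch of this toy permutation test. The block averages below are hypothetical placeholders (the slide's exact values are not fully reproduced here); only the mechanics of enumerating the 20 relabelings follow the slides.

```python
# Enumerate all C(6,3) = 20 relabelings of an ABABAB block design and compute
# the mean-difference statistic for each, then a one-sided permutation p-value.
from itertools import combinations
import numpy as np

block_means = np.array([103.0, 90.5, 99.2, 87.1, 99.4, 96.7])  # hypothetical
observed_labels = (0, 2, 4)   # the 'A' blocks in the ABABAB design

def stat(a_idx):
    a = block_means[list(a_idx)]
    b = np.delete(block_means, list(a_idx))
    return a.mean() - b.mean()

null_dist = np.array([stat(idx) for idx in combinations(range(6), 3)])
observed = stat(observed_labels)
p_value = np.mean(null_dist >= observed)   # observed relabeling is included
print(observed, p_value)                   # with these values, p = 1/20 = 0.05
```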

108 Small sample sizes: the permutation test doesn't work well with small sample sizes. The possible p-values for the previous example are 0.05, 0.1, 0.15, 0.2, etc., so the test tends to be conservative for small samples.

109 Permutation test & exchangeability: exchangeability is fundamental. Definition: the distribution of the data is unperturbed by permutation. Under H_0, exchangeability justifies permuting the data, which allows us to build the permutation distribution.

110 What is exchanged? Under the null, what can be swapped? (Figure: null data with slope = 0, y plotted against x)

111 What is exchanged? Under the null, what can be swapped? (Figure: null data with slope = 0, y plotted against x)

112 What is exchanged? Under the null, what can be swapped? (Figure: null data with slope = 0, y plotted against x)

113 Original data: (y_1, y_2, ..., y_8) paired with (x_1, x_2, ..., x_8).

114 Permuted data: the y values reordered as (y_8, y_3, y_2, y_7, y_6, y_1, y_4, y_5) against the original (x_1, x_2, ..., x_8).

115 When exchangeability doesn't hold: correlated data, e.g., temporal correlation. (Figure: unpermuted vs. permuted time series)

116 When exchangeability doesn't hold: correlated data, e.g., family data. (Figure: unpermuted vs. permuted)

117 When exchangeability doesn't hold: influential outliers with a continuous covariate. (Figure: original null data, Y vs. X, and a permutation with permuted X)

118 When exchangeability does hold: independent subjects and homoscedasticity. Heteroscedasticity is fine if you're running a 1-sample t-test (I'm sure there are exceptions).

119 Permutation test & exchangeability: subjects are exchangeable; under H_0, each subject's A/B labels can be flipped. fMRI scans are not exchangeable over time under H_0: if there is no signal, can we permute over time? No, permuting disrupts the order and the temporal autocorrelation.

120 Permutation test & exchangeability. Two-sample t-test: compare subjects in group 1 to subjects in group 2; randomly assign the group labels in permutations. One-sample t-test: randomly flip the sign of the values for some subjects. Correlation: randomly reorder the subjects in the dependent variable.

121 Questions for you Why are small sample sizes problematic for permutation testing?

122 Controlling FWER: permutation test. Parametric methods: assume the distribution of the max statistic under the null hypothesis. Nonparametric methods: use the data to find the distribution of the max statistic under the null hypothesis; again, any max statistic! (Figure: 5% tail of a parametric null max distribution vs. 5% tail of a nonparametric null max distribution)

123 Permutation test, other statistics: collect the max distribution to find a threshold that controls FWER; consider the smoothed variance t statistic to regularize the low-d.f. variance estimate.

124 Max statistic for imaging data: 1. Compute your statistic map for the original data. 2. Shuffle the labels and compute the statistic map. 3. Save the largest statistic over the whole brain. 4. Repeat steps 2-3 many times (typically thousands of permutations). 5. Use the distribution of maxima over permutations to compute a threshold. 6. Apply the threshold to the map from step 1.

125 Max statistic for imaging data: 1. Compute your statistic map for the original data. 2. Shuffle the labels and compute the statistic map. 3. Save the largest statistic over the whole brain. 4. Repeat steps 2-3 many times (typically thousands of permutations). 5. Use the distribution of maxima over permutations to compute a threshold. 6. Apply the threshold to the map from step 1.

126 Max statistic for imaging data: 1. Compute your statistic map for the original data. 2. Shuffle the labels and compute the statistic map. 3. Save the largest statistic over the whole brain. 4. Repeat steps 2-3 many times (typically thousands of permutations). 5. Use the distribution of maxima over permutations to compute a threshold. 6. Apply the threshold to the map from step 1.

127 Max statistic for imaging data: 1. Compute your statistic map for the original data. 2. Shuffle the labels and compute the statistic map. 3. Save the largest statistic over the whole brain. 4. Repeat steps 2-3 many times (typically thousands of permutations). 5. Use the distribution of maxima over permutations to compute a threshold. 6. Apply the threshold to the map from step 1.

128 Max statistic for imaging data: 1. Compute your statistic map for the original data. 2. Shuffle the labels and compute the statistic map. 3. Save the largest statistic over the whole brain. 4. Repeat steps 2-3 many times (typically thousands of permutations). 5. Use the distribution of maxima over permutations to compute a threshold. 6. Apply the threshold to the map from step 1.

129 Max statistic for imaging data: 1. Compute your statistic map for the original data. 2. Shuffle the labels and compute the statistic map. 3. Save the largest statistic over the whole brain. 4. Repeat steps 2-3 many times (typically thousands of permutations). 5. Use the distribution of maxima over permutations to compute a threshold. 6. Apply the threshold to the map from step 1. A sketch of these steps follows below.
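A hedged sketch of the voxelwise max-statistic procedure listed above, for a one-sample (sign-flipping) group analysis on simulated data. Array sizes, the number of permutations, and variable names are illustrative assumptions, not SnPM's or randomise's implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_voxels = 12, 5000
data = rng.standard_normal((n_subjects, n_voxels))        # subject x voxel maps

t_obs = stats.ttest_1samp(data, 0, axis=0).statistic       # step 1: original map

n_perm = 1000
max_t = np.empty(n_perm)
for i in range(n_perm):                                    # steps 2-4: permute, save max
    signs = rng.choice([-1.0, 1.0], size=n_subjects)[:, None]
    t_perm = stats.ttest_1samp(signs * data, 0, axis=0).statistic
    max_t[i] = t_perm.max()

threshold = np.quantile(max_t, 0.95)                       # step 5: 5% FWE threshold
print("FWE-corrected threshold:", threshold)
print("Significant voxels:", np.sum(t_obs > threshold))    # step 6: apply to original map
```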

130 (Figure: original data for three tests, y1, y2, and y3 each plotted against x, next to an as-yet empty null distribution of the maximum T)

131 Permutation 1 (figure: permuted y1, y2, y3 against x; the third test gives T = 1.55; the maximum T across tests is added to the null distribution of the maximum T).

132 Permutation 2 (figure: the third test gives T = 1.45; the maximum T is added to the null distribution).

133 Permutation 100 (figure: the third test gives T = 0.36; the null distribution of the maximum T continues to fill in).

134 Permutation 5000 (figure: the third test gives T = 2.25; final null distribution of the maximum T). Resulting corrected p-values for the three tests: p = 0.39, p = 0.21, p = 0.003.

135 Permutation test, smoothed variance t: collect the max distribution to find a threshold that controls FWER; consider the smoothed variance t statistic. (Figure: mean difference image, variance image, t-statistic image)

136 Permutation test, smoothed variance t: collect the max distribution to find a threshold that controls FWER; consider the smoothed variance t statistic. (Figure: mean difference image, smoothed variance image, smoothed variance t-statistic image)

137 Permutation test example: fMRI study of working memory, 12 subjects, block design (Marshuetz et al., 2000), item recognition task. Active: view five letters, 2 s pause, view a probe letter, respond. Baseline: view XXXXX, 2 s pause, view Y or N, respond. Second level RFX: a difference image, A-B, is constructed for each subject and entered into a one-sample t-test. (Figure: example trials, e.g., UBKDA ... D for active and XXXXX ... N for baseline, with yes/no responses)

138 Permutation test example: permute! There are 2^12 = 4,096 ways to flip the 12 subjects' A/B labels; for each, note the maximum of the t image. (Figure: permutation distribution of the maximum t; maximum intensity projection of the thresholded t image)

139 (Figure: t_11 statistic maps thresholded three ways: the nonparametric permutation threshold u_Perm; the RF threshold u_RF = 9.87 and Bonferroni threshold u_Bonf; and the smoothed variance t statistic with its nonparametric threshold, 378 significant voxels.) The RFT threshold is conservative here (the data are not smooth enough and the d.f. are too small). The permutation test is more efficient than Bonferroni since it accounts for smoothness, and the smoothed variance t is more efficient for small d.f.

140 (Figure: t_11 statistic maps thresholded three ways: the nonparametric permutation threshold u_Perm; the RF threshold u_RF = 9.87 and Bonferroni threshold u_Bonf; and the smoothed variance t statistic with its nonparametric threshold, 378 significant voxels.) The RFT threshold is conservative here (the data are not smooth enough and the d.f. are too small). The permutation test is more efficient than Bonferroni since it accounts for smoothness, and the smoothed variance t is more efficient for small d.f.

141 (Figure: t_11 statistic maps thresholded three ways: the nonparametric permutation threshold u_Perm; the RF threshold u_RF = 9.87 and Bonferroni threshold u_Bonf; and the smoothed variance t statistic with its nonparametric threshold, 378 significant voxels.) The RFT threshold is conservative here (the data are not smooth enough and the d.f. are too small). The permutation test is more efficient than Bonferroni since it accounts for smoothness, and the smoothed variance t is more efficient for small d.f.

142 (Figure: t_11 statistic maps thresholded three ways: the nonparametric permutation threshold u_Perm; the RF threshold u_RF = 9.87 and Bonferroni threshold u_Bonf; and the smoothed variance t statistic with its nonparametric threshold, 378 significant voxels.) The RFT threshold is conservative here (the data are not smooth enough and the d.f. are too small). The permutation test is more efficient than Bonferroni since it accounts for smoothness, and the smoothed variance t is more efficient for small d.f.

143 Permutation test cluster statistic: a two-step process. Step 1: define clusters by an arbitrary cluster-forming threshold u_clus. (Figure: statistic values over space with the u_clus line)

144 Permutation test cluster statistic: a two-step process. Step 1: define clusters by the arbitrary threshold u_clus; step 2: retain clusters larger than the α-level size threshold k_α. (Figure: one cluster not significant, one cluster significant relative to k_α)

145 Permutation test cluster statistics. Cluster size: simply count how many voxels are in the cluster. Cluster mass: sum up the statistic values in the cluster.

146 Permutation test cluster statistics: 1. Find clusters with the original data. 2. Permute the labels. 3. Compute statistics. 4. Apply the cluster-forming threshold. 5. Compute the cluster statistics. 6. Save the largest (cluster size or mass). 7. Repeat steps 2-6 many times (typically thousands of permutations). 8. Use the distribution from step 7 to find the cluster (size or mass) threshold.

147 Permutation test cluster statistics: 1. Find clusters with the original data. 2. Permute the labels. 3. Compute statistics. 4. Apply the cluster-forming threshold. 5. Compute the cluster statistics. 6. Save the largest (cluster size or mass). 7. Repeat steps 2-6 many times (typically thousands of permutations). 8. Use the distribution from step 7 to find the cluster (size or mass) threshold.

148 Permutation test cluster statistics: 1. Find clusters with the original data. 2. Permute the labels. 3. Compute statistics. 4. Apply the cluster-forming threshold. 5. Compute the cluster statistics. 6. Save the largest (cluster size or mass). 7. Repeat steps 2-6 many times (typically thousands of permutations). 8. Use the distribution from step 7 to find the cluster (size or mass) threshold.

149 Permutation test cluster statistics: 1. Find clusters with the original data. 2. Permute the labels. 3. Compute statistics. 4. Apply the cluster-forming threshold. 5. Compute the cluster statistics. 6. Save the largest (cluster size or mass). 7. Repeat steps 2-6 many times (typically thousands of permutations). 8. Use the distribution from step 7 to find the cluster (size or mass) threshold.

150 Permutation test cluster statistics: 1. Find clusters with the original data. 2. Permute the labels. 3. Compute statistics. 4. Apply the cluster-forming threshold. 5. Compute the cluster statistics. 6. Save the largest (cluster size or mass). 7. Repeat steps 2-6 many times (typically thousands of permutations). 8. Use the distribution from step 7 to find the cluster (size or mass) threshold.

151 Permutation test cluster statistics: 1. Find clusters with the original data. 2. Permute the labels. 3. Compute statistics. 4. Apply the cluster-forming threshold. 5. Compute the cluster statistics. 6. Save the largest (cluster size or mass). 7. Repeat steps 2-6 many times (typically thousands of permutations). 8. Use the distribution from step 7 to find the cluster (size or mass) threshold.

152 Permutation test cluster statistics: 1. Find clusters with the original data. 2. Permute the labels. 3. Compute statistics. 4. Apply the cluster-forming threshold. 5. Compute the cluster statistics. 6. Save the largest (cluster size or mass). 7. Repeat steps 2-6 many times (typically thousands of permutations). 8. Use the distribution from step 7 to find the cluster (size or mass) threshold.

153 Permutation test cluster statistics: 1. Find clusters with the original data. 2. Permute the labels. 3. Compute statistics. 4. Apply the cluster-forming threshold. 5. Compute the cluster statistics. 6. Save the largest (cluster size or mass). 7. Repeat steps 2-6 many times (typically thousands of permutations). 8. Use the distribution from step 7 to find the cluster (size or mass) threshold, and apply it to the clusters from step 1. A sketch of these steps follows below.
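A rough sketch of the cluster-size permutation procedure above, again on simulated data; the image size, cluster-forming threshold, and the use of scipy.ndimage.label to define clusters are illustrative choices, not the implementation used by SnPM or randomise.

```python
import numpy as np
from scipy import stats, ndimage

def max_cluster_size(t_map, u_clus):
    """Largest connected supra-threshold cluster, in voxels."""
    labels, n = ndimage.label(t_map > u_clus)
    if n == 0:
        return 0
    return np.bincount(labels.ravel())[1:].max()

rng = np.random.default_rng(2)
n_subjects, shape = 12, (20, 20, 20)
data = rng.standard_normal((n_subjects, *shape))
u_clus = stats.t.isf(0.001, df=n_subjects - 1)      # cluster-forming threshold, p < .001

t_obs = stats.ttest_1samp(data, 0, axis=0).statistic  # step 1: original clusters

n_perm = 500
max_sizes = np.empty(n_perm)
for i in range(n_perm):                               # steps 2-7
    signs = rng.choice([-1.0, 1.0], size=n_subjects).reshape(-1, 1, 1, 1)
    t_perm = stats.ttest_1samp(signs * data, 0, axis=0).statistic
    max_sizes[i] = max_cluster_size(t_perm, u_clus)

k_alpha = np.quantile(max_sizes, 0.95)                # step 8: FWE 5% size threshold
print("Clusters larger than", k_alpha, "voxels are significant")
```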

154 Questions for you: Why don't permutation tests, alone, fix multiple comparisons? What did we need to use to address multiple comparisons? How are the voxelwise and clusterwise permutation tests set up?

155 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output.

156 FWER vs FDR. FWER = P(# of true null voxels declared active ≥ 1). FDR = E(# of true null voxels declared active / # of voxels declared active). (2x2 table: Declared active / Declared inactive / Total vs. Non-active / Active / Total)

157 Controlling FDR: tends to be less conservative than controlling FWER. What rate is appropriate? Imagers use 5%... out of habit; FDR people I've met outside of imaging often use higher values. Decide before you threshold your data; don't choose whatever makes your data look good.

158 Benjamini & Hochberg procedure: select the desired limit α on FDR; order the p-values, p_(1) ≤ p_(2) ≤ ... ≤ p_(v); let r be the largest i such that p_(i) ≤ (i/v) α; reject all hypotheses corresponding to p_(1), ..., p_(r). (Figure: ordered p-values p_(i) plotted against i/v on [0, 1])

159 Benjamini & Hochberg procedure: select the desired limit α on FDR; order the p-values, p_(1) ≤ p_(2) ≤ ... ≤ p_(v); let r be the largest i such that p_(i) ≤ (i/v) α; reject all hypotheses corresponding to p_(1), ..., p_(r). (Figure: ordered p-values p_(i) plotted against i/v, with the rejection line of slope α)
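A minimal sketch of the Benjamini & Hochberg step-up procedure described above (not SPM's implementation); the simulated p-values are arbitrary assumptions.

```python
import numpy as np

def bh_threshold(p_values, alpha=0.05):
    """Return the p-value cutoff from the B&H procedure (0 if nothing passes)."""
    p_sorted = np.sort(p_values)
    v = p_sorted.size
    below = p_sorted <= (np.arange(1, v + 1) / v) * alpha
    if not below.any():
        return 0.0
    r = np.max(np.where(below)[0])      # largest i with p_(i) <= (i/v) * alpha
    return p_sorted[r]

rng = np.random.default_rng(3)
p = np.concatenate([rng.uniform(size=900), rng.uniform(0, 0.001, size=100)])
cutoff = bh_threshold(p, alpha=0.05)
print("Reject all tests with p <=", cutoff, ";", np.sum(p <= cutoff), "rejections")
```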

160 FDR example (figure: the FWER permutation threshold yields relatively few significant voxels, while the FDR threshold admits thousands more).

161 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output.

162 Guess what? Now you have the knowledge needed to understand the huge/daunting table SPM spits out! Let's do it.

163 SPM output

164 SPM output: which level of inference is missing?

165 SPM output: what exciting conclusion can we make?

166 SPM output. Recall: the FWE correction shown earlier was super conservative compared to FDR. Why does this look different?

167 SPM output: what do you think K_E is? What statistic does the p-value correspond to?

168 SPM output: the uncorrected statistic doesn't take the search volume into account.

169 SPM output: see the note at the bottom?

170 SPM output: do any clusters have more than one peak?

171 SPM output: last, but not least, you'll use this in lab. This is used to threshold clusters so you can look at only the significant ones.

172 SPM output Compare this threshold to the FWE p-values for cluster stats

173 Questions? That's it!


Evaluation. Evaluate what? For really large amounts of data... A: Use a validation set. Evaluate what? Evaluation Charles Sutton Data Mining and Exploration Spring 2012 Do you want to evaluate a classifier or a learning algorithm? Do you want to predict accuracy or predict which one is better?

More information

HST.583 Functional Magnetic Resonance Imaging: Data Acquisition and Analysis Fall 2006

HST.583 Functional Magnetic Resonance Imaging: Data Acquisition and Analysis Fall 2006 MIT OpenCourseWare http://ocw.mit.edu HST.583 Functional Magnetic Resonance Imaging: Data Acquisition and Analysis Fall 2006 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

Supplementary methods

Supplementary methods Supplementary methods This section provides additional technical details on the sample, the applied imaging and analysis steps and methods. Structural imaging Trained radiographers placed all participants

More information

Introduction to hypothesis testing

Introduction to hypothesis testing Introduction to hypothesis testing Mark Johnson Macquarie University Sydney, Australia February 27, 2017 1 / 38 Outline Introduction Hypothesis tests and confidence intervals Classical hypothesis tests

More information

Lecture 15: Segmentation (Edge Based, Hough Transform)

Lecture 15: Segmentation (Edge Based, Hough Transform) Lecture 15: Segmentation (Edge Based, Hough Transform) c Bryan S. Morse, Brigham Young University, 1998 000 Last modified on February 3, 000 at :00 PM Contents 15.1 Introduction..............................................

More information

We can see that some anatomical details are lost after aligning and averaging brains, especially on the cortical level.

We can see that some anatomical details are lost after aligning and averaging brains, especially on the cortical level. Homework 3 - Model answer Background: Listen to the lecture on Data formats (Voxel and affine transformation matrices, nifti formats) and group analysis (Group Analysis, Anatomical Normalization, Multiple

More information

TOPOLOGICAL INFERENCE FOR EEG AND MEG 1. BY JAMES M. KILNER AND KARL J. FRISTON University College London

TOPOLOGICAL INFERENCE FOR EEG AND MEG 1. BY JAMES M. KILNER AND KARL J. FRISTON University College London The Annals of Applied Statistics 2010, Vol. 4, No. 3, 1272 1290 DOI: 10.1214/10-AOAS337 Institute of Mathematical Statistics, 2010 TOPOLOGICAL INFERENCE FOR EEG AND MEG 1 BY JAMES M. KILNER AND KARL J.

More information

Discovering Visual Hierarchy through Unsupervised Learning Haider Razvi

Discovering Visual Hierarchy through Unsupervised Learning Haider Razvi Discovering Visual Hierarchy through Unsupervised Learning Haider Razvi hrazvi@stanford.edu 1 Introduction: We present a method for discovering visual hierarchy in a set of images. Automatically grouping

More information

Surface-based Analysis: Inter-subject Registration and Smoothing

Surface-based Analysis: Inter-subject Registration and Smoothing Surface-based Analysis: Inter-subject Registration and Smoothing Outline Exploratory Spatial Analysis Coordinate Systems 3D (Volumetric) 2D (Surface-based) Inter-subject registration Volume-based Surface-based

More information

Supplementary Figure 1. Decoding results broken down for different ROIs

Supplementary Figure 1. Decoding results broken down for different ROIs Supplementary Figure 1 Decoding results broken down for different ROIs Decoding results for areas V1, V2, V3, and V1 V3 combined. (a) Decoded and presented orientations are strongly correlated in areas

More information

Fmri Spatial Processing

Fmri Spatial Processing Educational Course: Fmri Spatial Processing Ray Razlighi Jun. 8, 2014 Spatial Processing Spatial Re-alignment Geometric distortion correction Spatial Normalization Smoothing Why, When, How, Which Why is

More information

Introduction. Introduction. Related Research. SIFT method. SIFT method. Distinctive Image Features from Scale-Invariant. Scale.

Introduction. Introduction. Related Research. SIFT method. SIFT method. Distinctive Image Features from Scale-Invariant. Scale. Distinctive Image Features from Scale-Invariant Keypoints David G. Lowe presented by, Sudheendra Invariance Intensity Scale Rotation Affine View point Introduction Introduction SIFT (Scale Invariant Feature

More information

Data Analysis and Solver Plugins for KSpread USER S MANUAL. Tomasz Maliszewski

Data Analysis and Solver Plugins for KSpread USER S MANUAL. Tomasz Maliszewski Data Analysis and Solver Plugins for KSpread USER S MANUAL Tomasz Maliszewski tmaliszewski@wp.pl Table of Content CHAPTER 1: INTRODUCTION... 3 1.1. ABOUT DATA ANALYSIS PLUGIN... 3 1.3. ABOUT SOLVER PLUGIN...

More information

Lab #9: ANOVA and TUKEY tests

Lab #9: ANOVA and TUKEY tests Lab #9: ANOVA and TUKEY tests Objectives: 1. Column manipulation in SAS 2. Analysis of variance 3. Tukey test 4. Least Significant Difference test 5. Analysis of variance with PROC GLM 6. Levene test for

More information

Edge Detection Lecture 03 Computer Vision

Edge Detection Lecture 03 Computer Vision Edge Detection Lecture 3 Computer Vision Suggested readings Chapter 5 Linda G. Shapiro and George Stockman, Computer Vision, Upper Saddle River, NJ, Prentice Hall,. Chapter David A. Forsyth and Jean Ponce,

More information

Building Better Parametric Cost Models

Building Better Parametric Cost Models Building Better Parametric Cost Models Based on the PMI PMBOK Guide Fourth Edition 37 IPDI has been reviewed and approved as a provider of project management training by the Project Management Institute

More information

the PyHRF package P. Ciuciu1,2 and T. Vincent1,2 Methods meeting at Neurospin 1: CEA/NeuroSpin/LNAO

the PyHRF package P. Ciuciu1,2 and T. Vincent1,2 Methods meeting at Neurospin 1: CEA/NeuroSpin/LNAO Joint detection-estimation of brain activity from fmri time series: the PyHRF package Methods meeting at Neurospin P. Ciuciu1,2 and T. Vincent1,2 philippe.ciuciu@cea.fr 1: CEA/NeuroSpin/LNAO www.lnao.fr

More information

9.2 Types of Errors in Hypothesis testing

9.2 Types of Errors in Hypothesis testing 9.2 Types of Errors in Hypothesis testing 1 Mistakes we could make As I mentioned, when we take a sample we won t be 100% sure of something because we do not take a census (we only look at information

More information

Estimation of Item Response Models

Estimation of Item Response Models Estimation of Item Response Models Lecture #5 ICPSR Item Response Theory Workshop Lecture #5: 1of 39 The Big Picture of Estimation ESTIMATOR = Maximum Likelihood; Mplus Any questions? answers Lecture #5:

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 04 130131 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Histogram Equalization Image Filtering Linear

More information

Section 2.3: Simple Linear Regression: Predictions and Inference

Section 2.3: Simple Linear Regression: Predictions and Inference Section 2.3: Simple Linear Regression: Predictions and Inference Jared S. Murray The University of Texas at Austin McCombs School of Business Suggested reading: OpenIntro Statistics, Chapter 7.4 1 Simple

More information

Economics Nonparametric Econometrics

Economics Nonparametric Econometrics Economics 217 - Nonparametric Econometrics Topics covered in this lecture Introduction to the nonparametric model The role of bandwidth Choice of smoothing function R commands for nonparametric models

More information

CSE 586 Final Programming Project Spring 2011 Due date: Tuesday, May 3

CSE 586 Final Programming Project Spring 2011 Due date: Tuesday, May 3 CSE 586 Final Programming Project Spring 2011 Due date: Tuesday, May 3 What I have in mind for our last programming project is to do something with either graphical models or random sampling. A few ideas

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 10 130221 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Canny Edge Detector Hough Transform Feature-Based

More information

Lab 9. Julia Janicki. Introduction

Lab 9. Julia Janicki. Introduction Lab 9 Julia Janicki Introduction My goal for this project is to map a general land cover in the area of Alexandria in Egypt using supervised classification, specifically the Maximum Likelihood and Support

More information

Recap: Gaussian (or Normal) Distribution. Recap: Minimizing the Expected Loss. Topics of This Lecture. Recap: Maximum Likelihood Approach

Recap: Gaussian (or Normal) Distribution. Recap: Minimizing the Expected Loss. Topics of This Lecture. Recap: Maximum Likelihood Approach Truth Course Outline Machine Learning Lecture 3 Fundamentals (2 weeks) Bayes Decision Theory Probability Density Estimation Probability Density Estimation II 2.04.205 Discriminative Approaches (5 weeks)

More information

Decoding analyses. Neuroimaging: Pattern Analysis 2017

Decoding analyses. Neuroimaging: Pattern Analysis 2017 Decoding analyses Neuroimaging: Pattern Analysis 2017 Scope Conceptual explanation of decoding/machine learning (ML) and related concepts; Not about mathematical basis of ML algorithms! Learning goals

More information

Feature Detectors and Descriptors: Corners, Lines, etc.

Feature Detectors and Descriptors: Corners, Lines, etc. Feature Detectors and Descriptors: Corners, Lines, etc. Edges vs. Corners Edges = maxima in intensity gradient Edges vs. Corners Corners = lots of variation in direction of gradient in a small neighborhood

More information

Goals of the Lecture. SOC6078 Advanced Statistics: 9. Generalized Additive Models. Limitations of the Multiple Nonparametric Models (2)

Goals of the Lecture. SOC6078 Advanced Statistics: 9. Generalized Additive Models. Limitations of the Multiple Nonparametric Models (2) SOC6078 Advanced Statistics: 9. Generalized Additive Models Robert Andersen Department of Sociology University of Toronto Goals of the Lecture Introduce Additive Models Explain how they extend from simple

More information