Controlling for multiple comparisons in imaging analysis. Wednesday, Lecture 2. Jeanette Mumford, University of Wisconsin - Madison

1 Controlling for multiple comparisons in imaging analysis. Wednesday, Lecture 2. Jeanette Mumford, University of Wisconsin - Madison

2 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output

4 Review of hypothesis testing: What is H0? What is HA? What are the steps of carrying out a hypothesis test?

6 Steps of hypothesis testing

9 Steps of hypothesis testing: What do we compare this area to (the p-value)?

10 What does the p-value mean? p = 0.01

11 What does the p-value mean? p = 0.01. If the null distribution is true...

12 What does the p-value mean? p = 0.01. If the null distribution is true, the probability of observing my statistic (or something more extreme than it) is 0.01.

13 What does the p-value threshold imply? We choose 0.05. Less than 0.05 and we reject the null hypothesis; greater than 0.05 and we fail to reject the null hypothesis.

16 Type I error: Assuming the null is true, the probability that we reject the null.

17 Type I error: Assuming the null is true, the probability that we reject the null. 5% of the time, we'll have a false positive.

18 Interpretation: 1100 total voxels; 100 voxels have β=δ; 80% power -> 80 voxels detected; 1000 voxels have β=0; 5% type I error -> 50 false positives. [Table: rows Non-active / Active / Total, columns Declared active / Declared inactive / Total]

19 Interpretation (continued): what we know (the test results) are the columns: Declared active / Declared inactive / Total.

20 Interpretation (continued): what we don't know (the truth) are the rows: Non-active / Active / Total.

21 Interpretation: 1100 total voxels; 100 voxels have signal (null is false); 80% power -> 80 voxels detected; 1000 voxels have no signal (null); 5% type I error -> 50 false positives. Row totals: Non-active = 1000, Active = 100, Total = 1100.

23 Interpretation (same setup): the Active row fills in as 80 declared active (Power) and 20 declared inactive (Type II err.), 100 in total.

25 Interpretation (same setup), full table: Non-active row: 50 declared active (Type I err.), 950 declared inactive (correct), 1000 total. Active row: 80 declared active (Power), 20 declared inactive (Type II err.), 100 total. Grand total 1100.

27 Interpretation (same setup): the focus is on controlling this number, the count of true-null voxels declared active (the 50 false positives).

28 Implication of type I error: If you run enough tests, you'll find something that is significant, but this doesn't mean it is truly significant. If you run 20 tests with a 5% threshold on type I errors, you expect about 1 significant test by chance alone, and that would be a false positive.
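
A quick numeric check of that claim (a minimal sketch in Python; the 20 tests and the 5% threshold are the numbers from this slide): the expected count of false positives is exactly 1, and the chance of seeing one or more is about 64%.

```python
# Expected number of false positives, and the chance of at least one,
# among 20 independent null tests thresholded at alpha = 0.05.
alpha, n_tests = 0.05, 20

expected_fp = alpha * n_tests            # 20 * 0.05 = 1.0
p_any_fp = 1 - (1 - alpha) ** n_tests    # 1 - 0.95**20 ~= 0.64

print(f"expected false positives: {expected_fp:.1f}")
print(f"P(at least one false positive): {p_any_fp:.2f}")
```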

29 Hypothesis testing in fMRI: mass univariate modeling. Fit a separate model for each voxel, look at images of statistics, apply a threshold.

30 Assessing statistic images: What threshold will show us signal? High threshold (t > 5.5): good specificity, poor power (risk of false negatives). Medium threshold (t > 3.5). Low threshold (t > 0.5): poor specificity (risk of false positives), good power.

31 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output

32 Levels of inference Voxel level Cluster level Peak level Set level

33 Voxel-level inference: Retain voxels above an α-level threshold u_α. Gives the best spatial specificity: the null hypothesis at a single voxel can be rejected. [Figure: statistic values over space, threshold u_α, significant vs. non-significant voxels]

36 Cluster-level inference: a two-step process. Define clusters by an arbitrary threshold u_clus. [Figure: u_clus over space]

37 Cluster-level inference: define clusters by an arbitrary threshold u_clus, then retain clusters larger than an α-level size threshold k_α. [Figure: one cluster not significant, one cluster significant]

38 Cluster-level inference: typically better sensitivity, but worse spatial specificity. The null hypothesis of the entire cluster is rejected, which only means that one or more of the voxels in the cluster is active.

39 Peak-level inference: again start with a cluster-forming threshold, but instead of cluster size, focus on peak height. As with cluster-level inference, significance applies to a set of voxels: the peak and its neighbors. [Figure: u_clus over space]

40 Peak-level inference: same idea. [Figure: peaks Z_1 through Z_5 above u_clus over space]

41 Peak-level inference: same idea. [Figure: peaks Z_1 through Z_5, with the peak threshold u_peak above the cluster-forming threshold u_clus]

43 Set-level inference: Is there any activation anywhere in the brain? An omnibus hypothesis test of all voxels simultaneously. If significant, we only know there's activation somewhere in the brain.

44 Levels of inference Voxel level Cluster level Peak level Set level

45 Questions for you: Why do some approaches require 2 thresholds? What thresholding strategy do people typically use?

46 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output

47 What error rate should we control? Per comparison error rate? Family wise error rate? False discovery rate?

48 Different types of error rates. PCER (per comparison error rate): controlling each voxel at 5%; expect 5% of null voxels to be (mistakenly) deemed active. FWER (family-wise error rate): controls the probability of any false positives; run 20 NULL group analyses (on 20 data sets) and, on average, only 1 analysis will have a significant finding.

50 Different types of error rates. FDR (false discovery rate): of the voxels you deemed significant, what percentage were null.

51 FWER vs FDR. FWER = P(# true nulls declared active ≥ 1). FDR = E(# true nulls declared active / # voxels declared active). [Table: Declared active / Declared inactive / Total by Non-active / Active / Total]
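
A worked example using the filled-in table from slide 25: 130 voxels are declared active (50 true nulls plus 80 truly active voxels), so the realized false discovery proportion is 50/130 ≈ 0.38, while a family-wise error has certainly occurred, because the number of true nulls declared active (50) is at least 1.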

53 False discovery rate illustration: Noise, Signal, Signal + Noise [figure panels]

54 [Figure] Control of the per comparison rate at 10%: 11.3%, 11.3%, 12.5%, 10.8%, 11.5%, 10.0%, 10.7%, 11.2%, 10.2%, 9.5% (percentage of null pixels that are false positives). Control of the familywise error rate at 10%: occurrence of a familywise error (FWE). Control of the false discovery rate at 10%: 6.7%, 10.4%, 14.9%, 9.3%, 16.2%, 13.8%, 14.0%, 10.5%, 12.2%, 8.7% (percentage of activated pixels that are false positives).

57 Considerations with multiple comparisons: what statistic you're working with (voxelwise? clusterwise?) and what error rate you're controlling (per comparison error rate, family-wise error rate, false discovery rate).

58 Correlated data: Images typically have correlated voxels, and # of false positives = 0.05 x (# of independent tests). Extreme example: the data are smoothed so much that all voxels are identical; then only 1 out of 20 data sets would have a false positive.

60 Correlated data: Counting false positives becomes tricky, since you don't know the number of independent things.

61 When data are not correlated: p-values computed from simulated null data.

62 When data are not correlated: thresholded at p < 0.05 -> 4.7% are false positives.

63 Correlated data: Counting false positives becomes tricky, since you don't know the number of independent things.

64 Same demo, with smoothed data: thresholded p-value map -> 4.2% are false positives.
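
The uncorrelated version of this demo can be reproduced in a few lines (a minimal sketch in Python with numpy/scipy, which are assumptions here; only the 5% rate comes from the slides): simulate null data at many independent "voxels", test each one, and count how many fall below p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 10,000 independent null "voxels", 20 observations each (arbitrary sizes).
n_vox, n_obs = 10_000, 20
data = rng.standard_normal((n_vox, n_obs))

# One-sample t-test at every voxel against a true mean of zero.
t, p = stats.ttest_1samp(data, popmean=0, axis=1)

# With no correction, roughly 5% of the null voxels pass p < 0.05.
print(f"uncorrected false positive rate: {np.mean(p < 0.05):.3f}")
```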

65 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output

66 FWER. FWER = P(# true nulls declared active ≥ 1). FDR = E(# true nulls declared active / # voxels declared active). [Table as before]

67 FWER correction: Bonferroni. Based on the Bonferroni inequality: P(E_1 or E_2 or ... or E_n) ≤ Σ_{i=1}^n P(E_i). If P(Y_i passes | H_0) ≤ α/n for every test, then P(some Y_i passes | H_0) ≤ Σ_{i=1}^n P(Y_i passes | H_0) ≤ α. For 100,000 voxels: α = 0.05/100,000 = 0.0000005.
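
A minimal sketch of that Bonferroni arithmetic (Python; scipy is an assumption, used only to turn the corrected per-voxel α into an equivalent one-sided z threshold):

```python
from scipy import stats

alpha, n_voxels = 0.05, 100_000

# Per-voxel threshold so that the family-wise error rate stays below alpha.
alpha_bonf = alpha / n_voxels            # 0.05 / 100,000 = 5e-07

# Equivalent one-sided z threshold for that per-voxel alpha.
z_thresh = stats.norm.isf(alpha_bonf)    # roughly 4.9

print(alpha_bonf, round(z_thresh, 2))
```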

71 FWER correction: Bonferroni can be too conservative. Bonferroni assumes all tests are independent, but fMRI data tend to be spatially correlated, so the # of independent tests < # of voxels.

72 Smooth data: How will the Bonferroni correction work with smoothed data? Will the false positive rate increase or decrease?

73 Questions: Why doesn't Bonferroni work well with our imaging data? Why does smoothness make multiple comparison correction more tricky?

74 FWER: Random Field Theory. A parametric approach to controlling false positives (parametric = there's an equation that will spit out the p-value). The details are beyond the scope of this course. Tends to be as conservative as Bonferroni.

76 FWER: Random Field Theory. A parametric approach to controlling false positives (parametric = there's an equation that will spit out the p-value). The voxelwise version tends to be as conservative as Bonferroni.

77 FWER with the max statistic: FWER & the distribution of the maximum. FWER = P(FWE) = P(one or more voxels ≥ u | H_0) = P(max voxel ≥ u | H_0). So the 100(1-α) percentile of the max distribution controls FWER: FWER = P(max voxel ≥ u_α | H_0) ≤ α. [Figure: null distribution of the maximum with threshold u_α and upper tail area α]

85 FWER MTP solutions: Random Field Theory. The Euler characteristic χ_u is a topological measure: #blobs - #holes. At high thresholds there are no holes and never more than 1 blob, so it just counts blobs. For a random field: FWER = P(max voxel ≥ u | H_0) = P(one or more blobs | H_0) ≈ P(χ_u ≥ 1 | H_0) ≈ E(χ_u | H_0). [Figure: threshold and suprathreshold sets]

86 Distribution details: The math is hairy! See Nichols and Hayasaka (2003) and Cao and Worsley (2001). What you need to know: it depends on the smoothness of your image, so you must quantify smoothness, and it is important to report it when using RFT.

87 General idea: E(χ_u) = (mathy stuff) x Volume / Smoothness. We know what the volume is; what is smoothness?

88 Smoothness: How smooth are the data? Measured by FWHM = [FWHM_x, FWHM_y, FWHM_z]. Starting with white noise and smoothing with a Gaussian, how large does the variance of that Gaussian need to be for the smoothness to match your data?

89 RESEL = RESolution ELement: RESEL = FWHM_x x FWHM_y x FWHM_z. RESEL count: if your voxels were the size of a RESEL, how many would be required to fill your volume? Example: 10 voxels with 2.5-voxel FWHM smoothness -> 4 RESELs.

90 [Figure: 10 voxels, FWHM = 2.5 voxels, RESEL count = 4]
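
The RESEL count is just that rescaling of volume by smoothness (a minimal sketch in Python; the 1-D numbers are from the slide, while the 3-D volume and FWHM are made-up values for illustration):

```python
# 1-D example from the slide: 10 voxels with FWHM = 2.5 voxels.
print(10 / 2.5)                          # RESEL count = 4

# 3-D case: one RESEL = FWHM_x * FWHM_y * FWHM_z (all in voxel units).
n_voxels = 200_000                       # hypothetical analysis volume
fwhm_x, fwhm_y, fwhm_z = 3.0, 3.0, 3.0   # hypothetical smoothness in voxels
resel_count = n_voxels / (fwhm_x * fwhm_y * fwhm_z)
print(round(resel_count))                # ~7,407 RESELs
```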

91 Note about the RESEL count: it is not the number of independent tests, and not a magic bullet for a better Bonferroni. It is a re-expression of volume in terms of smoothness, and we need it because it is necessary for calculating our p-values.

92 Revisit the distribution: E(χ_u) = (mathy stuff) x Volume / Smoothness, where smoothness is defined in RESELs and E(χ_u) is our p-value. How does the p-value change as volume increases? How does it change as smoothness increases?

93 RFT adapts: for larger volumes it is more strict (the multiple comparison problem is worse); for smoother data it is less strict (the multiple comparison problem is less severe).

94 Shortcomings of RFT: it requires estimating a lot of parameters, and the random field must be sufficiently smooth. If you don't spatially smooth the data enough, RFT doesn't work well. I'll cover the Eklund paper later on today!

95 Bonferroni and RFT [figure: t_11 statistic thresholded with the RF & Bonferroni thresholds; u_RF = 9.87, u_Bonf = ..., ... sig. vox.]

96 RFT: Voxelwise RFT is rarely used in practice because it is too conservative. Clusterwise RFT is very common; we'll learn about cluster stats with permutation testing.

97 FYI: If you're using RFT, you probably shouldn't lower the cluster-forming threshold, because the assumptions could break down. If you really want to lower it, switch to nonparametric approaches (SnPM, randomise).

98 Questions for you: Why do we use the max statistic for multiple comparison correction? Was this a voxelwise or clusterwise approach?

99 Parametric vs nonparametric. Parametric: assume the distribution shape; typically 1 or more parameters must be estimated. Nonparametric: no assumption on the distribution shape; use the data to construct the distribution. Related to the bootstrap and jackknife, BUT not the same!!!

100 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output

101 Permutation test: Generally can be used when the true distribution shape is unknown (the data don't follow a normal distribution), but it generally doesn't control for multiple comparisons. Using it in conjunction with the max statistic tackles 2 problems: not knowing the structure of the distribution, and controlling FWER.

104 Permutation test: first without using the max statistic, so we understand how it generally works; then with the max statistic, so we understand how to control FWER.

105 Permutation test. Parametric methods: assume the distribution of the statistic under the null hypothesis. Nonparametric methods: use the data to find the distribution of the statistic under the null hypothesis, for any statistic! [Figure: 5% tails of a parametric and a nonparametric null distribution]

106 Permutation test toy example: Data from a voxel in a visual stimulation experiment. A: active, flashing checkerboard; B: baseline, fixation. 6 blocks, ABABAB; just consider the block averages. Null hypothesis H_0: no experimental effect, the A & B labels are arbitrary. Statistic: mean difference.

107 Permutation test toy example: Under H_0, consider all equivalent relabelings: AAABBB ABABAB BAAABB BABBAA AABABB ABABBA BAABAB BBAAAB AABBAB ABBAAB BAABBA BBAABA AABBBA ABBABA BABAAB BBABAA ABAABB ABBBAA BABABA BBBAAA

108 Permutation test toy example: Under H_0, consider all equivalent relabelings and compute all possible statistic values: AAABBB 4.82, ABABAB 9.45, BAAABB, BABBAA, AABABB, ABABBA 6.97, BAABAB 1.10, BBAAAB 3.15, AABBAB, ABBAAB 1.38, BAABBA, BBAABA 0.67, AABBBA, ABBABA, BABAAB, BBABAA 3.25, ABAABB 6.86, ABBBAA 1.48, BABABA, BBBAAA -4.82

109 Permutation test toy example: Under H_0, consider all equivalent relabelings, compute all possible statistic values (as above), and find the 95th percentile of the permutation distribution.

111 Permutation test toy example: Under H_0, consider all equivalent relabelings, compute all possible statistic values, and find the 95th percentile of the permutation distribution.
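
The whole toy example fits in a few lines (a minimal sketch in Python; the six block averages are hypothetical numbers, but the 20 relabelings and the mean-difference statistic follow the slides):

```python
from itertools import combinations
import numpy as np

# Hypothetical block averages for the ABABAB design (A = active, B = baseline).
blocks = np.array([103.0, 90.0, 99.0, 87.0, 99.0, 96.0])
observed = blocks[[0, 2, 4]].mean() - blocks[[1, 3, 5]].mean()

# All C(6,3) = 20 ways of relabeling three of the six blocks as "A".
perm_stats = []
for a_idx in combinations(range(6), 3):
    b_idx = [i for i in range(6) if i not in a_idx]
    perm_stats.append(blocks[list(a_idx)].mean() - blocks[b_idx].mean())
perm_stats = np.array(perm_stats)

# Permutation p-value: fraction of relabelings at least as extreme as observed.
p_value = np.mean(perm_stats >= observed)
print(len(perm_stats), round(observed, 2), p_value)  # 20 labelings; smallest possible p = 1/20
```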

112 Small sample sizes: The permutation test doesn't work well with small sample sizes. The possible p-values for the previous example are 0.05, 0.1, 0.15, 0.2, etc. (multiples of 1/20), so it tends to be conservative for small sample sizes.

113 Permutation test & exchangeability: Exchangeability is fundamental. Definition: the distribution of the data is unperturbed by permutation. Under H_0, exchangeability justifies permuting the data, which allows us to build the permutation distribution.

114 Permutation test & exchangeability: Subjects are exchangeable; under H_0, each subject's A/B labels can be flipped. fMRI scans are not exchangeable over time under H_0: even if there is no signal, permuting over time disrupts the ordering and the temporal autocorrelation.

115 Permutation test & exchangeability: Two-sample t-test: compare subjects in group 1 to subjects in group 2, and randomly reassign group labels in the permutations. One-sample t-test: randomly flip the sign of the values for some subjects.

116 Questions for you: What is permuted for a 1-sample t-test? For a 2-sample t-test? For a correlation? Why are small sample sizes problematic for permutation testing?

117 Controlling FWER with the permutation test. Parametric methods: assume the distribution of the max statistic under the null hypothesis. Nonparametric methods: use the data to find the distribution of the max statistic under the null hypothesis; again, any max statistic! [Figure: 5% tails of a parametric and a nonparametric null max distribution]

118 Permutation test, other statistics: Collect the max distribution to find the threshold that controls FWER. Consider the smoothed variance t statistic to regularize the low-df variance estimate.

119 Max statistic for imaging data: 1. Compute your statistic map for the original data. 2. Shuffle labels and compute the statistic map. 3. Save the largest statistic over the whole brain. 4. Repeat steps 2-3 many times (thousands of permutations). 5. Use the distribution of the max statistic over permutations to compute the threshold. 6. Apply the threshold to the map from step 1.
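
A minimal sketch of steps 1-6 for a one-sample group analysis (Python; the sign flipping is the relabeling described on slide 115, and the array sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_sample_t(data):
    """t statistic at every voxel for data shaped (subjects, voxels)."""
    n = data.shape[0]
    return data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n))

# Hypothetical group data: 12 subjects x 5,000 voxels of contrast estimates.
data = rng.standard_normal((12, 5000))

t_obs = one_sample_t(data)                       # step 1: original statistic map

n_perm = 1000
max_t = np.empty(n_perm)
for i in range(n_perm):                          # steps 2-4: flip signs, save the max
    signs = rng.choice([-1, 1], size=(data.shape[0], 1))
    max_t[i] = one_sample_t(data * signs).max()

u_fwe = np.quantile(max_t, 0.95)                 # step 5: 95th %ile of the max distribution
sig_voxels = t_obs >= u_fwe                      # step 6: threshold the original map
print(round(u_fwe, 2), int(sig_voxels.sum()))
```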

125 Permutation test, smoothed variance t: Collect the max distribution to find the threshold that controls FWER. Consider the smoothed variance t statistic. [Formula: the t-statistic is the mean difference divided by its variance-based standard error.]

126 Permutation test, smoothed variance t: Collect the max distribution to find the threshold that controls FWER. Consider the smoothed variance t statistic. [Formula: the smoothed variance t-statistic uses a spatially smoothed variance estimate in the denominator.]

127 Permutation test example: fMRI study of working memory, 12 subjects, block design (Marshuetz et al., 2000). Item recognition. Active: view five letters, 2 s pause, view a probe letter, respond (e.g., UBKDA, probe D, yes). Baseline: view XXXXX, 2 s pause, view Y or N, respond (e.g., XXXXX, probe N, no). Second level RFX: a difference image A - B is constructed for each subject, followed by a one-sample t-test.

128 Permutation test example: Permute! There are 2^12 = 4,096 ways to flip the 12 subjects' A/B labels. For each, note the maximum of the t image. [Figure: permutation distribution of the maximum t; maximum intensity projection of the thresholded t image]
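
As a worked detail: with 12 subjects there are 2^12 = 4,096 possible sign flips, so the smallest FWE-corrected p-value this permutation test can produce is 1/4,096 ≈ 0.00024.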

129 [Figure comparing thresholds] u_Perm = ..., ... sig. vox. (t_11 statistic, nonparametric threshold). u_RF = 9.87, u_Bonf = ..., ... sig. vox. (t_11 statistic, RF & Bonferroni thresholds); the RFT threshold is conservative here (not smooth enough, d.f. too small). 378 sig. vox. for the smoothed variance t statistic with the nonparametric threshold. The permutation test is more efficient than Bonferroni since it accounts for smoothness, and the smoothed variance t is more efficient for small d.f.

133 Permutation test, cluster statistic: a two-step process. Define clusters by an arbitrary threshold u_clus. [Figure: u_clus over space]

134 Permutation test, cluster statistic: define clusters by an arbitrary threshold u_clus, then retain clusters larger than an α-level size threshold k_α. [Figure: one cluster not significant, one cluster significant]

135 Permutation test, cluster statistics. Cluster size: simply count how many voxels are in the cluster. Cluster mass: sum up the statistic values in the cluster.

136 Permutation test, cluster statistics: 1. Find clusters with the original data. 2. Permute labels. 3. Compute statistics. 4. Apply the cluster-forming threshold. 5. Compute the cluster statistics. 6. Save the largest (cluster size or mass). 7. Repeat steps 2-6 many times (thousands of permutations). 8. Use the distribution from step 7 to find the cluster (size or mass) threshold and apply it to the clusters from step 1.
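
A minimal sketch of the cluster-size version (Python; scipy.ndimage.label does the cluster finding, and the 2-D image size, smoothness, and cluster-forming threshold of z > 2.3 are hypothetical choices; a real analysis would recompute the statistic map from permuted or sign-flipped data rather than redrawing noise):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def smooth_null_map(shape=(64, 64), sigma=2.0):
    """Hypothetical smooth null z-map: smoothed white noise rescaled to unit variance."""
    img = ndimage.gaussian_filter(rng.standard_normal(shape), sigma)
    return img / img.std()

def max_cluster_size(stat_map, u_clus):
    """Size of the largest suprathreshold cluster in a statistic image."""
    labels, n_clusters = ndimage.label(stat_map > u_clus)
    if n_clusters == 0:
        return 0
    return np.bincount(labels.ravel())[1:].max()

u_clus = 2.3                                   # cluster-forming threshold (step 4)
stat_obs = smooth_null_map()                   # stand-in for the original map (step 1)

n_perm = 1000
max_sizes = np.empty(n_perm)
for i in range(n_perm):                        # steps 2-7: "permute", threshold, save max size
    max_sizes[i] = max_cluster_size(smooth_null_map(), u_clus)

k_alpha = np.quantile(max_sizes, 0.95)         # step 8: cluster-size threshold controlling FWER
print(k_alpha, max_cluster_size(stat_obs, u_clus))
```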

144 Questions for you: Why don't permutation tests, alone, fix multiple comparisons? What did we need to use to address multiple comparisons? How are the voxelwise and clusterwise permutation tests set up?

145 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output

146 FWER vs FDR. FWER = P(# true nulls declared active ≥ 1). FDR = E(# true nulls declared active / # voxels declared active). [Table as before]

147 Controlling FDR: Tends to be less conservative than controlling FWER. What rate is appropriate? Imagers use 5%... out of habit; FDR people I've met outside of imaging often use higher values. Decide before you threshold your data; don't choose whatever makes your data look good.

148 Benjamini & Hochberg procedure: Select the desired limit α on FDR. Order the p-values, p_(1) ≤ p_(2) ≤ ... ≤ p_(v). Let r be the largest i such that p_(i) ≤ (i/v) α. Reject all hypotheses corresponding to p_(1), ..., p_(r). [Figure: ordered p-values p_(i) plotted against i/v, with the line of slope α]
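
A minimal sketch of those steps (Python; the example p-values are made up):

```python
import numpy as np

def bh_cutoff(p_values, alpha=0.05):
    """Benjamini-Hochberg: return the p-value cutoff (0 if nothing is rejected)."""
    p_sorted = np.sort(p_values)
    v = len(p_sorted)
    below = p_sorted <= (np.arange(1, v + 1) / v) * alpha   # p_(i) <= (i/v) * alpha
    if not below.any():
        return 0.0
    r = np.nonzero(below)[0].max()      # index of the largest i satisfying the condition
    return p_sorted[r]                  # reject every hypothesis with p <= this cutoff

p = np.array([0.001, 0.008, 0.012, 0.041, 0.22, 0.60, 0.91])
cutoff = bh_cutoff(p, alpha=0.05)
print(cutoff, int(np.sum(p <= cutoff)))   # cutoff = 0.012 -> 3 rejections here
```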

151 FDR example [figure]: FWER permutation threshold = ..., ... voxels; FDR threshold = ..., ...,073 voxels.

152 Where we're going: Review of hypothesis testing, introduce the multiple testing problem; Levels of inference (voxel/cluster/peak/set); Types of error rate control (none/FWER/FDR); Family-wise error control approaches (parametric/nonparametric); FDR; Relating all of this to SPM output

153 Guess what? Now you have the knowledge needed to understand a huge, daunting table SPM spits out! Let's do it.

154 SPM output

155 Which level of inference is missing? SPM output

156 SPM output: what exciting conclusion can we make?

157 SPM output: Recall that the FWE correction shown earlier was super conservative compared to FDR. Why does this look different?

158 SPM output: What do you think K_E is? What statistic does the p-value correspond to?

159 SPM output: The uncorrected stat doesn't take the search volume into account.

160 SPM output: See the note at the bottom?

161 Do any clusters have more than one peak? SPM output

162 SPM output: Last, but not least, you'll use this in lab. This is used to threshold clusters so you can look at only the significant ones.

163 SPM output: Compare this threshold to the FWE p-values for the cluster stats.

164 Questions? That's it!


Case-Based Reasoning. CS 188: Artificial Intelligence Fall Nearest-Neighbor Classification. Parametric / Non-parametric. CS 188: Artificial Intelligence Fall 2008 Lecture 25: Kernels and Clustering 12/2/2008 Dan Klein UC Berkeley Case-Based Reasoning Similarity for classification Case-based reasoning Predict an instance

More information

CS 188: Artificial Intelligence Fall 2008

CS 188: Artificial Intelligence Fall 2008 CS 188: Artificial Intelligence Fall 2008 Lecture 25: Kernels and Clustering 12/2/2008 Dan Klein UC Berkeley 1 1 Case-Based Reasoning Similarity for classification Case-based reasoning Predict an instance

More information

Lecture note 7: Playing with convolutions in TensorFlow

Lecture note 7: Playing with convolutions in TensorFlow Lecture note 7: Playing with convolutions in TensorFlow CS 20SI: TensorFlow for Deep Learning Research (cs20si.stanford.edu) Prepared by Chip Huyen ( huyenn@stanford.edu ) This lecture note is an unfinished

More information

Fmri Spatial Processing

Fmri Spatial Processing Educational Course: Fmri Spatial Processing Ray Razlighi Jun. 8, 2014 Spatial Processing Spatial Re-alignment Geometric distortion correction Spatial Normalization Smoothing Why, When, How, Which Why is

More information

CS558 Programming Languages

CS558 Programming Languages CS558 Programming Languages Winter 2018 Lecture 7b Andrew Tolmach Portland State University 1994-2018 Dynamic Type Checking Static type checking offers the great advantage of catching errors early And

More information

FMRI Pre-Processing and Model- Based Statistics

FMRI Pre-Processing and Model- Based Statistics FMRI Pre-Processing and Model- Based Statistics Brief intro to FMRI experiments and analysis FMRI pre-stats image processing Simple Single-Subject Statistics Multi-Level FMRI Analysis Advanced FMRI Analysis

More information

Data 8 Final Review #1

Data 8 Final Review #1 Data 8 Final Review #1 Topics we ll cover: Visualizations Arrays and Table Manipulations Programming constructs (functions, for loops, conditional statements) Chance, Simulation, Sampling and Distributions

More information

Zurich SPM Course Voxel-Based Morphometry. Ged Ridgway (Oxford & UCL) With thanks to John Ashburner and the FIL Methods Group

Zurich SPM Course Voxel-Based Morphometry. Ged Ridgway (Oxford & UCL) With thanks to John Ashburner and the FIL Methods Group Zurich SPM Course 2015 Voxel-Based Morphometry Ged Ridgway (Oxford & UCL) With thanks to John Ashburner and the FIL Methods Group Examples applications of VBM Many scientifically or clinically interesting

More information

Hyperparameter optimization. CS6787 Lecture 6 Fall 2017

Hyperparameter optimization. CS6787 Lecture 6 Fall 2017 Hyperparameter optimization CS6787 Lecture 6 Fall 2017 Review We ve covered many methods Stochastic gradient descent Step size/learning rate, how long to run Mini-batching Batch size Momentum Momentum

More information

Microso 埘 Exam Dumps PDF for Guaranteed Success

Microso 埘 Exam Dumps PDF for Guaranteed Success Microso 埘 70 698 Exam Dumps PDF for Guaranteed Success The PDF version is simply a copy of a Portable Document of your Microso 埘 70 698 ques ons and answers product. The Microso 埘 Cer fied Solu on Associa

More information

CS1114 Section 8: The Fourier Transform March 13th, 2013

CS1114 Section 8: The Fourier Transform March 13th, 2013 CS1114 Section 8: The Fourier Transform March 13th, 2013 http://xkcd.com/26 Today you will learn about an extremely useful tool in image processing called the Fourier transform, and along the way get more

More information

Bootstrapping Methods

Bootstrapping Methods Bootstrapping Methods example of a Monte Carlo method these are one Monte Carlo statistical method some Bayesian statistical methods are Monte Carlo we can also simulate models using Monte Carlo methods

More information

Introduction to fmri. Pre-processing

Introduction to fmri. Pre-processing Introduction to fmri Pre-processing Tibor Auer Department of Psychology Research Fellow in MRI Data Types Anatomical data: T 1 -weighted, 3D, 1/subject or session - (ME)MPRAGE/FLASH sequence, undistorted

More information

ALE Meta-Analysis: Controlling the False Discovery Rate and Performing Statistical Contrasts

ALE Meta-Analysis: Controlling the False Discovery Rate and Performing Statistical Contrasts Human Brain Mapping 25:155 164(2005) ALE Meta-Analysis: Controlling the False Discovery Rate and Performing Statistical Contrasts Angela R. Laird, 1 P. Mickle Fox, 1 Cathy J. Price, 2 David C. Glahn, 1,3

More information

Fast or furious? - User analysis of SF Express Inc

Fast or furious? - User analysis of SF Express Inc CS 229 PROJECT, DEC. 2017 1 Fast or furious? - User analysis of SF Express Inc Gege Wen@gegewen, Yiyuan Zhang@yiyuan12, Kezhen Zhao@zkz I. MOTIVATION The motivation of this project is to predict the likelihood

More information

EPI Data Are Acquired Serially. EPI Data Are Acquired Serially 10/23/2011. Functional Connectivity Preprocessing. fmri Preprocessing

EPI Data Are Acquired Serially. EPI Data Are Acquired Serially 10/23/2011. Functional Connectivity Preprocessing. fmri Preprocessing Functional Connectivity Preprocessing Geometric distortion Head motion Geometric distortion Head motion EPI Data Are Acquired Serially EPI Data Are Acquired Serially descending 1 EPI Data Are Acquired

More information

TOPOLOGICAL INFERENCE FOR EEG AND MEG 1. BY JAMES M. KILNER AND KARL J. FRISTON University College London

TOPOLOGICAL INFERENCE FOR EEG AND MEG 1. BY JAMES M. KILNER AND KARL J. FRISTON University College London The Annals of Applied Statistics 2010, Vol. 4, No. 3, 1272 1290 DOI: 10.1214/10-AOAS337 Institute of Mathematical Statistics, 2010 TOPOLOGICAL INFERENCE FOR EEG AND MEG 1 BY JAMES M. KILNER AND KARL J.

More information

SPM8 for Basic and Clinical Investigators. Preprocessing. fmri Preprocessing

SPM8 for Basic and Clinical Investigators. Preprocessing. fmri Preprocessing SPM8 for Basic and Clinical Investigators Preprocessing fmri Preprocessing Slice timing correction Geometric distortion correction Head motion correction Temporal filtering Intensity normalization Spatial

More information

Vmware 2V0 641 Exam Dumps PDF for Guaranteed Success

Vmware 2V0 641 Exam Dumps PDF for Guaranteed Success Vmware 2V0 641 Exam Dumps PDF for Guaranteed Success The PDF version is simply a copy of a Portable Document of your Vmware 2V0 641 ques 韫 ons and answers product. The VMware Cer 韫 fied Professional 6

More information

[2:3] Linked Lists, Stacks, Queues

[2:3] Linked Lists, Stacks, Queues [2:3] Linked Lists, Stacks, Queues Helpful Knowledge CS308 Abstract data structures vs concrete data types CS250 Memory management (stack) Pointers CS230 Modular Arithmetic !!!!! There s a lot of slides,

More information

Introduction to hypothesis testing

Introduction to hypothesis testing Introduction to hypothesis testing Mark Johnson Macquarie University Sydney, Australia February 27, 2017 1 / 38 Outline Introduction Hypothesis tests and confidence intervals Classical hypothesis tests

More information

Quiz Section Week 3 April 12, Functions Reading from and writing to files Complexity

Quiz Section Week 3 April 12, Functions Reading from and writing to files Complexity Quiz Section Week 3 April 12, 2016 Functions Reading from and writing to files Complexity Hypothesis testing: how interesting is my data? Test statistic: a number that describes how interesting your data

More information

Machine Learning and Data Mining. Clustering (1): Basics. Kalev Kask

Machine Learning and Data Mining. Clustering (1): Basics. Kalev Kask Machine Learning and Data Mining Clustering (1): Basics Kalev Kask Unsupervised learning Supervised learning Predict target value ( y ) given features ( x ) Unsupervised learning Understand patterns of

More information

OPTIONAL EXERCISE 1: CREATING A FUSION PROJECT PART A

OPTIONAL EXERCISE 1: CREATING A FUSION PROJECT PART A Exercise Objec ves In the previous exercises, you were provided a full Fusion LIDAR dataset. In this exercise, you will begin with raw LIDAR data and create a new Fusion project one that will be as complete

More information

Spatial Regularization of Functional Connectivity Using High-Dimensional Markov Random Fields

Spatial Regularization of Functional Connectivity Using High-Dimensional Markov Random Fields Spatial Regularization of Functional Connectivity Using High-Dimensional Markov Random Fields Wei Liu 1, Peihong Zhu 1, Jeffrey S. Anderson 2, Deborah Yurgelun-Todd 3, and P. Thomas Fletcher 1 1 Scientific

More information

Lab 9. Julia Janicki. Introduction

Lab 9. Julia Janicki. Introduction Lab 9 Julia Janicki Introduction My goal for this project is to map a general land cover in the area of Alexandria in Egypt using supervised classification, specifically the Maximum Likelihood and Support

More information