Beyond the Assumption of Constant Hazard Rate in Estimating Incidence Rate on Current Status Data with Applications to Phase IV Cancer Trial


Beyond the Assumption of Constant Hazard Rate in Estimating Incidence Rate on Current Status Data with Applications to Phase IV Cancer Trial. Deokumar Srivastava, Ph.D., Member, Department of Biostatistics, St. Jude Children's Research Hospital. Society for Clinical Trials, May 17-20, 2015. Joint work with Liang Zhu, Melissa Hudson, Jianmin Pan, and Shesh N. Rai.

Outline
- Motivating Example
- Statistical Methodology
- Simulation and Application
- Conclusion

Motivating Example
Hudson et al. (2007), JCO, 3635-3643: noninvasive evaluation of late anthracycline cardiac toxicity in childhood cancer survivors (CCS).
CCS treated with anthracyclines and cardiac radiation are at increased risk for late onset of cardiac toxicity.
It is not feasible to evaluate patients frequently over the long term; only the current status of each patient is observed, together with whether onset occurred before the survey time.
Risk Groups:
- At Risk: treated with cardiotoxic therapy
- Not at Risk: did not receive cardiotoxic therapy

Description of Data

Cardiac Toxicity Measures
Fractional Shortening (FS): left ventricular systolic performance; abnormal if FS < 0.28.
Afterload (AF): left ventricular end-systolic wall stress; abnormal if AF > 74 g/cm².

Statistical Problem To estimate the cumulative incidence and confidence intervals.

Interval Censored Data
Instead of observing the exact event time T, it is known only that the event occurred in an interval (L, R].
Current Status Data (Case I ICD): either L = 0 or R = ∞; that is, every observation is either left- or right-censored.
General or Case II ICD: at least one observation has both L and R in (0, ∞); that is, the data include at least one finite interval away from zero.
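As a concrete illustration not in the talk, the (L, R] representation above can be coded in a few lines; the helper name below is hypothetical:

```python
import math

def current_status_interval(monitoring_time, event_observed):
    """Map one current-status observation to the (L, R] notation:
    if the event has already occurred by the monitoring time, the
    observation is left-censored, (0, t]; otherwise it is
    right-censored, (t, infinity)."""
    if event_observed:
        return (0.0, monitoring_time)      # left-censored: event in (0, t]
    return (monitoring_time, math.inf)     # right-censored: event in (t, inf)

# e.g. a survivor surveyed at year 7 with an abnormal reading,
# versus one surveyed at year 7 with a normal reading
left = current_status_interval(7.0, True)    # (0.0, 7.0)
right = current_status_interval(7.0, False)  # (7.0, inf)
```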

General Model (illness-death diagram)
X(t): state occupied by a survivor at time t.
T: observation time (death, cardiac failure, or survey).
U: time to cardiac abnormality.
States: 1 = alive with no cardiac abnormality; 2 = alive with cardiac abnormality; 3 = death/cardiac failure.
The intensities λ_1(u) (state 1 → 2), λ_2(u) (state 1 → 3), and λ_3(t|u) (state 2 → 3, given abnormality onset at u) are the transition rates.

Pseudo Survival Functions
The pseudo survival functions corresponding to λ_1(u), λ_2(u), and λ_3(t|u) are:
Q_i(t) = exp{−∫_0^t λ_i(v) dv} for i = 1, 2
Q_3(t|u) = exp{−∫_0^t λ_3(v|u) dv}
Q(t) = exp{−∫_0^t [λ_1(v) + λ_2(v)] dv} = Q_1(t) Q_2(t)
Q(t) is the probability that the time to the first event (alive with abnormal value, or death with normal value) exceeds t.

Likelihood Construction
Interest is in estimating Λ_1(t) = ∫_0^t λ_1(u) du, the cumulative incidence function.

Likelihood Function
The log-likelihood is
log L(θ) = Σ_{i=1}^n { a_i log L_1(t_i) + b_i log L_2(t_i) + c_i log L_3(t_i) + d_i log L_4(t_i) },
where a_i = δ_i(1 − γ_i), b_i = (1 − δ_i)(1 − γ_i), c_i = δ_i γ_i, and d_i = (1 − δ_i)γ_i are the indicators corresponding to observation types 1 to 4, and L_1, …, L_4 are the corresponding likelihood contributions.

Analysis using Piecewise Exponential Model
Rai et al. (2013) conducted the analysis assuming exponential and piecewise exponential distributions.
λ_1 was assumed piecewise exponential, with parameter λ_11 for t < t_c (2 years or 5 years) and λ_12 for t ≥ t_c.
Limitations:
- One has to know the number and location of the change points.
- One has to assume that the intensity function is constant within each period.
These assumptions are hard to justify in practice, and an alternative approach free of them would be preferable.

Analysis using Weibull Model
The log-likelihood based on the Weibull distribution, for the special case λ_2 = λ_3 = 0 and a_i = c_i = 0, can be written as
log L(λ_1, α_1) = −Σ_{i=1}^n b_i λ_1 t_i^{α_1} + Σ_{i=1}^n d_i log{1 − exp(−λ_1 t_i^{α_1})}.
The maximum likelihood estimates are obtained by taking the derivatives of the log-likelihood with respect to the parameters, setting them to zero, and solving the resulting equations.
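A minimal numerical sketch of this maximization (my own code, not the authors'; all function names are hypothetical) fits the two Weibull parameters to current status data with scipy, optimizing on the log scale instead of solving the score equations analytically:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_cs_neg_loglik(params, t, delta):
    """Negative log-likelihood for current status data under a Weibull
    cumulative hazard lambda_1 * t**alpha_1 (parameters mirror the
    slide's lambda_1, alpha_1).  A subject still event-free at t (the
    b_i term) contributes -lambda_1 * t**alpha_1; a subject whose event
    has already occurred (the d_i term) contributes
    log(1 - exp(-lambda_1 * t**alpha_1))."""
    log_lam, log_alp = params                     # log scale keeps both positive
    lam, alp = np.exp(log_lam), np.exp(log_alp)
    cum_haz = lam * t ** alp
    ll = np.sum(np.where(delta == 0, -cum_haz, np.log1p(-np.exp(-cum_haz))))
    return -ll

def fit_weibull_current_status(t, delta):
    res = minimize(weibull_cs_neg_loglik, x0=[0.0, 0.0],
                   args=(t, delta), method="Nelder-Mead")
    return np.exp(res.x)                          # (lambda_1 hat, alpha_1 hat)

# Sanity check on synthetic data with true lambda_1 = 0.15, alpha_1 = 1.5.
rng = np.random.default_rng(42)
n = 4000
event_time = (rng.exponential(1.0, n) / 0.15) ** (1 / 1.5)   # Weibull via inverse CDF
survey_time = rng.uniform(0.5, 10.0, n)                      # one monitoring time each
delta = (event_time <= survey_time).astype(int)              # all that is observed
lam_hat, alp_hat = fit_weibull_current_status(survey_time, delta)
```

Nelder-Mead on the log-parameters avoids positivity constraints; in the synthetic check the estimates should land near the true (0.15, 1.5).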

Simulation Study
- Setting I: Data generated from a piecewise exponential distribution; estimates obtained using both piecewise exponential and Weibull distributions.
- Setting II: Data generated from two different Weibull distributions, mimicking a slowly increasing and a rapidly increasing hazard; estimates of the cumulative incidence (CI) again obtained under both Weibull and piecewise exponential distributions.
- Setting III: Data generated from piecewise exponential as in Setting I, but with the change point assumed at t−1, t, and t+1 years.
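The piecewise exponential generator used in settings like these can be sketched by inverting the cumulative hazard (a hypothetical helper, not the authors' code):

```python
import numpy as np

def rpexp(n, rates, breaks, rng):
    """Draw n event times from a piecewise exponential distribution by
    inverting the cumulative hazard.  `rates` has one more entry than
    `breaks`: rates=[0.15, 0.4], breaks=[2.0] reproduces Setting I's
    lambda_11 = 0.15 before the change point and lambda_12 = 0.4 after."""
    edges = [0.0] + list(breaks)
    cum = [0.0]                                   # cumulative hazard at each edge
    for j in range(len(breaks)):
        cum.append(cum[-1] + rates[j] * (edges[j + 1] - edges[j]))
    e = rng.exponential(1.0, size=n)              # unit-exponential draws
    t = np.empty(n)
    for i, ei in enumerate(e):
        j = 0                                     # find the piece containing e_i
        while j < len(breaks) and ei >= cum[j + 1]:
            j += 1
        t[i] = edges[j] + (ei - cum[j]) / rates[j]
    return t

rng = np.random.default_rng(1)
t_event = rpexp(20000, rates=[0.15, 0.4], breaks=[2.0], rng=rng)
survey = rng.uniform(0.0, 10.0, size=20000)       # one monitoring time per subject
delta = (t_event <= survey).astype(int)           # only the current status is kept
# True cumulative incidence at the change point is 1 - exp(-0.15 * 2) ≈ 0.26,
# matching the tables that follow.
ci_at_2 = np.mean(t_event <= 2.0)
```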

Performance of the Weibull model for data generated from a piecewise exponential distribution with λ_11 = 0.15 and λ_12 = 0.4 (n = 400)

Follow-up 10 years / change point 2 years:
Time  True CI  Piecewise Exp. CI (SE)  Weibull CI (SE)
 1    0.14     0.14 (0.03)             0.14 (0.03)
 2    0.26     0.26 (0.05)             0.32 (0.04)
 3    0.50     0.51 (0.03)             0.50 (0.04)
 4    0.67     0.67 (0.03)             0.65 (0.03)
 5    0.78     0.78 (0.03)             0.76 (0.03)
 6    0.85     0.85 (0.02)             0.84 (0.02)
 7    0.90     0.90 (0.02)             0.90 (0.02)
 8    0.93     0.93 (0.02)             0.94 (0.02)
 9    0.95     0.95 (0.01)             0.96 (0.01)
10    0.97     0.97 (0.01)             0.98 (0.01)

Follow-up 10 years / change point 5 years:
Time  True CI  Piecewise Exp. CI (SE)  Weibull CI (SE)
 1    0.14     0.14 (0.02)             0.10 (0.03)
 2    0.26     0.26 (0.03)             0.24 (0.03)
 3    0.36     0.36 (0.03)             0.38 (0.03)
 4    0.45     0.45 (0.04)             0.50 (0.03)
 5    0.53     0.53 (0.04)             0.61 (0.03)
 6    0.68     0.69 (0.03)             0.70 (0.03)
 7    0.79     0.79 (0.03)             0.78 (0.03)
 8    0.86     0.86 (0.03)             0.83 (0.03)
 9    0.90     0.90 (0.03)             0.88 (0.03)
10    0.94     0.93 (0.02)             0.91 (0.02)

Performance of the piecewise exponential model with three assumed change points (a), (b), and (c) for data generated from Weibull distributions corresponding to Cases A and B, with 10-year follow-up and sample size n = 400

Case A:
Time  True CI  Weibull CI (SE)  PE (a) CI (SE)  PE (b) CI (SE)  PE (c) CI (SE)
 1    0.06     0.06 (0.02)      0.03 (0.03)     0.06 (0.02)     0.09 (0.02)
 2    0.17     0.17 (0.03)      0.22 (0.02)     0.12 (0.04)     0.17 (0.03)
 3    0.30     0.30 (0.03)      0.37 (0.02)     0.32 (0.03)     0.24 (0.05)
 4    0.43     0.43 (0.03)      0.49 (0.03)     0.47 (0.03)     0.43 (0.03)
 5    0.56     0.56 (0.03)      0.59 (0.03)     0.59 (0.03)     0.58 (0.03)
 6    0.66     0.67 (0.03)      0.67 (0.03)     0.68 (0.03)     0.68 (0.03)
 7    0.75     0.75 (0.03)      0.74 (0.03)     0.75 (0.03)     0.76 (0.03)
 8    0.82     0.82 (0.03)      0.79 (0.03)     0.81 (0.03)     0.82 (0.03)
 9    0.88     0.88 (0.03)      0.83 (0.02)     0.85 (0.02)     0.87 (0.03)
10    0.92     0.92 (0.02)      0.86 (0.02)     0.88 (0.02)     0.90 (0.02)

Case B:
Time  True CI  Weibull CI (SE)  PE (a) CI (SE)  PE (b) CI (SE)  PE (c) CI (SE)
 1    0.03     0.03 (0.01)      0.10 (0.02)     0.04 (0.02)     0.14 (0.01)
 2    0.13     0.14 (0.03)      0.18 (0.03)     0.07 (0.03)     0.26 (0.02)
 3    0.28     0.28 (0.04)      0.26 (0.04)     0.35 (0.02)     0.36 (0.03)
 4    0.46     0.46 (0.04)      0.33 (0.05)     0.54 (0.03)     0.45 (0.04)
 5    0.62     0.62 (0.03)      0.61 (0.03)     0.68 (0.03)     0.52 (0.04)
 6    0.76     0.76 (0.03)      0.78 (0.03)     0.77 (0.03)     0.59 (0.04)
 7    0.86     0.86 (0.02)      0.87 (0.02)     0.84 (0.02)     0.83 (0.03)
 8    0.92     0.92 (0.02)      0.92 (0.02)     0.89 (0.02)     0.93 (0.02)
 9    0.96     0.96 (0.01)      0.95 (0.01)     0.92 (0.02)     0.97 (0.01)
10    0.98     0.98 (0.01)      0.97 (0.01)     0.94 (0.01)     0.99 (0.01)

Performance of the Weibull model and the piecewise exponential distribution for varying change points (a), (b), and (c), when data are generated from a piecewise exponential distribution with λ_11 = 0.15 and λ_12 = 0.4 (n = 400)

Follow-up 10 years / change point 2 years:
Time  True CI  Weibull CI (SE)  PE (a) CI (SE)  PE (b) CI (SE)  PE (c) CI (SE)
 1    0.14     0.14 (0.03)      0.10 (0.04)     0.14 (0.03)     0.17 (0.03)
 2    0.26     0.32 (0.04)      0.36 (0.03)     0.26 (0.05)     0.32 (0.04)
 3    0.50     0.50 (0.04)      0.55 (0.03)     0.51 (0.03)     0.44 (0.05)
 4    0.67     0.65 (0.03)      0.68 (0.03)     0.67 (0.03)     0.64 (0.03)
 5    0.78     0.76 (0.03)      0.77 (0.02)     0.78 (0.03)     0.77 (0.03)
 6    0.85     0.84 (0.02)      0.84 (0.02)     0.85 (0.02)     0.85 (0.02)
 7    0.90     0.90 (0.02)      0.89 (0.02)     0.90 (0.02)     0.90 (0.02)
 8    0.93     0.94 (0.02)      0.92 (0.02)     0.93 (0.02)     0.94 (0.02)
 9    0.95     0.96 (0.01)      0.94 (0.01)     0.95 (0.01)     0.96 (0.01)
10    0.97     0.98 (0.01)      0.96 (0.01)     0.97 (0.01)     0.97 (0.01)

Follow-up 10 years / change point 5 years:
Time  True CI  Weibull CI (SE)  PE (a) CI (SE)  PE (b) CI (SE)  PE (c) CI (SE)
 1    0.14     0.10 (0.03)      0.13 (0.02)     0.14 (0.02)     0.15 (0.01)
 2    0.26     0.24 (0.03)      0.25 (0.03)     0.26 (0.03)     0.27 (0.02)
 3    0.36     0.38 (0.03)      0.35 (0.04)     0.36 (0.03)     0.38 (0.03)
 4    0.45     0.50 (0.03)      0.43 (0.04)     0.45 (0.04)     0.47 (0.03)
 5    0.53     0.61 (0.03)      0.59 (0.03)     0.53 (0.04)     0.55 (0.04)
 6    0.68     0.70 (0.03)      0.71 (0.03)     0.69 (0.03)     0.62 (0.04)
 7    0.79     0.78 (0.03)      0.79 (0.03)     0.79 (0.03)     0.77 (0.03)
 8    0.86     0.83 (0.03)      0.85 (0.03)     0.86 (0.03)     0.86 (0.03)
 9    0.90     0.88 (0.03)      0.89 (0.02)     0.90 (0.03)     0.91 (0.03)
10    0.94     0.91 (0.02)      0.92 (0.02)     0.94 (0.02)     0.96 (0.02)

Application to AAF data

Year  Nonparametric CI  Weibull CI (SE)  Exp-1 CI (SE)   Exp-2 CI (SE)
 1    0                 0 (0)            0.016 (0.003)   0.000 (0.000)
 2    0                 0 (0)            0.032 (0.005)   0.000 (0.000)
 3    0                 0 (0)            0.047 (0.008)   0.000 (0.000)
 4    0                 0 (0)            0.063 (0.010)   0.000 (0.000)
 5    0                 0 (0)            0.079 (0.013)   0.000 (0.000)
 6    0                 0.062 (0.025)    0.095 (0.016)   0.030 (0.005)
 7    0.059             0.092 (0.028)    0.110 (0.018)   0.059 (0.010)
 8    0.059             0.114 (0.032)    0.126 (0.021)   0.089 (0.015)
 9    0.125             0.133 (0.037)    0.142 (0.023)   0.119 (0.020)
10    0.200             0.150 (0.043)    0.158 (0.026)   0.149 (0.025)
11    0.200             0.165 (0.048)    0.174 (0.029)   0.178 (0.030)
12    0.200             0.179 (0.054)    0.189 (0.031)   0.208 (0.035)
13    0.200             0.192 (0.060)    0.205 (0.034)   0.238 (0.040)
14    0.200             0.204 (0.066)    0.221 (0.037)   0.267 (0.045)
15    0.200             0.215 (0.072)    0.237 (0.039)   0.297 (0.050)
20    0.250             0.264 (0.098)    0.316 (0.052)   0.446 (0.075)

Application to AAF data

Conclusions
- Proposed an alternative to the piecewise exponential distribution for modeling the CI for current status data.
- The Weibull distribution overcomes the limitations inherent in the piecewise exponential approach: assuming the location of cut-points, deciding on the number of cut-points, and assuming a constant hazard within each time period.
- The simulations and applications suggest that the proposed approach is reasonable and provides a viable alternative for modeling current status data.

Future Work
- 34 observations are missing AF and 6 are missing both AF and FS. An imputation approach could provide more efficient estimates of the CI; a manuscript evaluating the performance of imputation is currently under review at Statistical Methods in Medical Research.
- AF is often evaluated as a continuous measure and FS as a binary measure (normal/abnormal). The two are correlated, so joint modeling is appropriate and needed. We are working on a manuscript comparing Bayesian and frequentist approaches for evaluating treatment effect with mixed endpoints.

Thank You!