The Quantitative Imaging Network: The National Cancer Institute Then (1939) and Now (2016)




Presentation Abstracts
QIN Annual Meeting
Monday & Tuesday
April 10 & 11, 2017


Summary of the 2016 QIN-NCTN Planning Meeting

Report on the 2016 QIN-NCTN Planning Meeting
Lawrence Schwartz, David Mankoff, Robert Nordstrom, Lori Henderson,

Paul Kinahan, Susanna Lee, Andriy Fedorov, Charles Apgar, Mark Rosen

A one-day planning meeting was held in Philadelphia in December 2016. The purpose was to bring together thought leaders from the NCTN, QIN investigators, and related groups for roundtable discussions on (1) what oncologists need from quantitative imaging in their oncology trials and (2) what imagers can offer to improve the efficacy of these trials. The specific goal of the meeting was to generate 4 to 6 ideas on how to develop prospective testing of quantitative imaging tools in national-level clinical trials. The morning session started with a series of short presentations by oncologists involved with national trials in the areas of systemic therapy, locally targeted therapy, immunotherapy, and precision oncology. This was rounded out by short presentations on the uses of imaging in clinical trials. The central part of the meeting was four parallel breakout sessions intended to generate ideas for prospective testing of quantitative imaging tools in national-level clinical trials, as mentioned above. The meeting concluded with a review of the breakout groups, a first pass at a summary, and a list of potential next steps.


The breakout sessions and the summary discussion that followed led to a good exchange of ideas between imagers and NCTN therapy trial leaders. Specific discussion items are listed below. In addition, we provide a brief summary of: (1) the QIN tool listing and descriptions needed to promote the use of QIN tools in NCTN trials, (2) a time scale for moving forward with QIN tool integration into NCTN trials, and (3) next steps and a framework for greater collaboration between the QIN and the NCTN.
Details of the meeting will be discussed, along with the follow-up actions for both the QIN and the NCTN.

Pathways to Clinical Trials Project

Hui-Kuo Shu MD, Ella Jones PhD, Richard Wahl MD,

John Buatti MD, Lori Henderson PhD
The Quantitative Imaging Network (QIN) has made significant progress in developing an array of tools for quantitative image analysis. The tools span multiple imaging modalities (CT/MR/PET) and address a wide range of quantitative imaging issues, including: harmonization through improved phantom analysis; automated methods for lesion segmentation to improve consistency; algorithms capable of automated, consistent feature generation; software for harmonization of image reconstruction; decision support tools using curated data; multi-parametric data integration tools, including those that combine multiple image sets of one kind, those that combine different kinds of images, and those that integrate genomic or other data; and tools for developing robust repository platforms for data sharing and analysis across sites. These tools have been informative to the group and have led to a number of challenges that revealed further opportunities for harmonization, improvement, and, importantly, bridging the gaps in optimal quantitative image analysis.

The progress in quantitative imaging tool development creates new opportunities for practical clinical implementation across multi-site applications in real prospective clinical trials that will ultimately serve as a test bed for clinical practice decision making in oncology. Several tools will be discussed in the framework of building pathways from tool development and validation into prospective clinical trials. Proposals for these next steps will be made and practicality will be discussed.




Response Assessment in Lymphoma: Imaging Criteria and Guidelines
Bruce D. Cheson, M.D.
Georgetown University Hospital, Lombardi Comprehensive Cancer Center, Washington, D.C.

Standardized staging and response criteria for lymphomas are essential to define the location and extent of disease, provide prognostic information, facilitate comparisons among studies, and assist regulatory agencies. The first accepted criteria for Hodgkin lymphoma (HL) and non-Hodgkin lymphomas (NHL) were published in 1999 and, with the subsequent availability of FDG PET-CT, were revised in 2007 to include PET-CT for response assessment, primarily for HL and diffuse large B-cell NHL (DLBCL). A number of studies warranted modification of these guidelines: data showing that PET-CT is more sensitive and specific than CT for staging, evidence that PET is valuable in restaging other FDG-avid histologies (e.g., follicular NHL), and the availability of a validated 5-point scale, the Deauville criteria. In 2014 the Lugano Classification and its companion paper on the use of PET in lymphoma established a new standard for staging and response assessment that is now widely used. It incorporated a modified Ann Arbor Classification and the 5-point scale, eliminated bone marrow biopsies for HL and for most patients with DLBCL if a PET was performed, and permitted residual masses in complete remissions as long as they were no longer FDG-avid. These guidelines discouraged interim scans, especially in NHL outside of a clinical trial, although such scans appear useful in HL. Recommendations were also provided as to when a contrast-enhanced CT scan should be performed along with a PET-CT, to minimize cost and radiation exposure. Finally, post-treatment surveillance scans were strongly discouraged. The international adoption of the Lugano Classification will help ensure that fewer patients are over- or under-treated, and will improve not only the conduct of clinical research but also the use of metabolic imaging in general practice.




Opportunities for Multi-applications of Quantitative Image Analysis Methods and Tools

across Modalities and Clinical Tasks
Maryellen Giger
The University of Chicago

The mission of the Quantitative Imaging Network (QIN) is to improve the role of quantitative imaging in clinical decision making in oncology by developing and validating data acquisition and analysis methods and tools to tailor treatment for individual patients and to predict or monitor response to drug or radiation therapy. Thus, some QIN research groups are focused on the development of quantitative image-based surrogate markers of tumors for use in predicting response to therapy and, ultimately, aiding in patient management. These analysis methods and tools can be viewed as yielding image-based biomarkers, or “virtual digital biopsies,” for the predictive models. Research on quantitative image analysis methods and tools includes (a) robustness studies across acquisition manufacturers, imaging protocols, reconstruction algorithms, and cancer subtypes, (b) development of quantitative features (phenotypes), (c) classifier design, (d) evaluation, and (e) clinical validation. In robustness studies, it is important to assess whether to use standardized image acquisition methods or to develop algorithms that are robust across acquisitions. Potential computer-extracted phenotypes include volumetric, morphological, textural, and kinetic features, as well as features obtained through deep learning with convolutional neural networks. As with many of the QIN teams, once an effective and efficient “pipeline” has been established and validated for a particular clinical question with a particular image acquisition system, similar processes can be conducted and fine-tuned for other modalities and clinical questions. Examples of such multi-application transitions of quantitative image analysis methods and tools will be discussed.
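As a concrete illustration of the kinds of computer-extracted phenotypes mentioned above, the sketch below derives a volumetric feature, a first-order intensity statistic, and a crude texture surrogate from a toy 2D image and binary lesion mask. The image values, mask, and pixel spacing are invented for demonstration; real radiomics pipelines use far richer feature sets (Haralick textures, kinetic parameters, deep features).

```python
# Illustrative sketch (not a QIN tool): a few simple phenotype-style
# features from a toy image and a binary lesion mask.

def lesion_features(image, mask, pixel_area_mm2=1.0):
    """Return a volumetric feature, a first-order intensity statistic,
    and a crude texture surrogate for the masked region."""
    vals = [image[r][c] for r in range(len(image))
            for c in range(len(image[0])) if mask[r][c]]
    area = len(vals) * pixel_area_mm2        # "volumetric" feature
    mean_intensity = sum(vals) / len(vals)   # first-order statistic
    # Crude texture surrogate: mean absolute difference between
    # horizontally adjacent in-mask pixels (real radiomics would use
    # GLCM/Haralick features instead).
    diffs = [abs(image[r][c] - image[r][c + 1])
             for r in range(len(image))
             for c in range(len(image[0]) - 1)
             if mask[r][c] and mask[r][c + 1]]
    texture = sum(diffs) / len(diffs) if diffs else 0.0
    return {"area_mm2": area, "mean": mean_intensity, "texture": texture}

image = [[0, 1, 2],
         [3, 5, 7],
         [2, 4, 6]]
mask  = [[0, 1, 1],
         [0, 1, 1],
         [0, 0, 0]]
print(lesion_features(image, mask))
```

In a robustness study of the kind described above, features like these would be recomputed across scanners and protocols to check their stability before classifier design.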



ECOG-ACRIN Plans for clinical testing of QIN tools
David Mankoff
University of Pennsylvania

ECOG-ACRIN QIN U01 Resource
Enabling prospective testing of QIN tools in trials in ECOG-ACRIN and other NCTN groups is a goal of the ECOG-ACRIN QIN U01 Resource. Prospective testing provides a key assessment of a QI tool’s performance in the “real world” of clinical trials. This includes testing the QI tool’s applicability to the range of image and data quality expected in multi-center trials and clinical practice, as opposed to the more uniform data sets obtained in single-center trials at academic centers or from selected archived trial datasets. In addition to testing QI tool performance, prospective inclusion in clinical trials also provides an opportunity to test tool implementation in settings distinct from the laboratory, including key components supporting QI tool use such as the user interface and data reporting structures.

Ongoing discussions with QIN and NCTN members, including the QIN-NCTN planning meeting in December 2016, have suggested avenues for accomplishing this goal. For QI tools ready for multi-center trial testing, two approaches to prospective testing have been suggested, depending upon the nature of the QI tool and the stage of trial development:


(1) Inclusion of an exploratory secondary objective that tests the QI tool’s performance. QI tools can be included in the design of new NCTN clinical trials, typically to test performance in assessing a specific trial endpoint or in providing new correlative science data. For example, a new tool for assessing a quantitative imaging response endpoint could be evaluated on how well its assessment of response predicts disease-free or overall survival compared to standard approaches. Alternatively, an informatics tool might test extracted imaging features as a biomarker for predicting response to specific types of therapy. In this approach, the new QI tool would be included in the early design of the clinical trial protocol as a secondary objective designed to test a specific hypothesis related to the tool, and tool testing would be a part of the trial from concept to completion.
(2) Prospectively planned analysis of a data set from an ongoing trial. As it may not always be possible to prospectively match QI tools to emerging clinical trial concepts, an alternative approach is a planned exploratory analysis of imaging data collected as part of a clinical trial, which can test a specific QI tool’s performance in analyzing multi-center data, similar to the prospective secondary-endpoint approach described above. This approach accomplishes many of the same goals as prospective inclusion of the QI tool in a secondary trial aim, but it does not test tool implementation within the trial, nor does it allow adjustment of data collection to meet any special needs of the tool. However, this approach offers the option of analyzing datasets from ongoing trials where prospective exploratory QI endpoints might have been overlooked or not feasible.
This talk will review the steps required for QIN members to undertake prospective QI tool testing in ECOG-ACRIN and other NCTN group clinical trials. I will provide an overview of the types of trials planned and ongoing in ECOG-ACRIN that would be amenable to tool testing, along with some specific examples of QIN tool endpoints in previous and planned ECOG-ACRIN trials. The talk will conclude with some specific suggestions for QIN members to become involved in early trial development with the goal of prospective testing of their QI tools.

Automated and adaptive segmentation of diverse cancer lesions

for treatment evaluation
Assaf Hoogi
Stanford University
Quantitative analysis of cancer lesions can help to assess the efficacy of cancer treatments. However, accurate characterization requires, as a first step, accurate and robust segmentation of target lesions. The main challenges hindering segmentation of cancer lesions are: a) low-contrast lesions, b) heterogeneous lesions, c) noisy images, and d) lesions located near an organ’s boundary. Current methods have been developed ad hoc for a specific cancer lesion type in order to address a specific challenge. Therefore, applying existing methods to diverse datasets in which lesions are located in different organs, screened by different modalities, or have different imaging characteristics usually results in poor segmentation quality.

An adaptive framework that can handle high diversity of image characteristics would be highly desirable.

In our work, we proposed a significant improvement of deformable models by introducing a method for automatic adaptation of 1) the cost function parameters and 2) the optimal local surrounding size to the spatial characteristics of the image. Both criteria play a key role in the optimization of the energy functional. Using these two novel ideas, we developed a generalized multi-lesion segmentation approach, applying exactly the same technique to several different lesion datasets with promising results. We used an iterative process that estimates these criteria via a machine learning technique (a convolutional neural network, CNN). The joint framework (CNN plus deformable models) captures the benefits of both approaches, aiming to overcome their limitations and ultimately to achieve significantly better results than either method alone.

We studied the effects of the adaptive framework on segmentation performance, demonstrating its strengths and capabilities by analyzing different datasets: MR liver lesions, CT liver lesions, DDSM breast cancer, CT kidney lesions, MR brain tumors, and CT lung cancer (Fig. 1). Lesions with substantially high diversity of spatial texture were included. We compared our results with 1) a state-of-the-art level set method that uses pre-defined fixed contour parameters (FCP) and 2) two commonly used CNN methods. To evaluate the methods, the Dice coefficient was calculated to measure the overlap between each automated segmentation and the average manual segmentation. Our method outperformed the state-of-the-art methods in terms of agreement with the manual marking, with an average Dice coefficient 0.17 higher than the FCP method and an average Dice improvement of 0.35 over the CNN methods.
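The Dice evaluation described above can be sketched in a few lines. The toy binary masks below stand in for an automated and a manual segmentation; they are illustrative, not actual lesion data.

```python
# Minimal sketch of the Dice overlap used to compare an automated
# segmentation against a manual reference segmentation.

def dice(a, b):
    """Dice coefficient between two same-shaped binary masks."""
    inter = size_a = size_b = 0
    for row_a, row_b in zip(a, b):
        for va, vb in zip(row_a, row_b):
            size_a += va
            size_b += vb
            inter += va and vb
    if size_a + size_b == 0:
        return 1.0  # both masks empty: vacuous perfect agreement
    return 2.0 * inter / (size_a + size_b)

auto   = [[1, 1, 0],
          [1, 1, 0],
          [0, 0, 0]]
manual = [[1, 1, 0],
          [1, 0, 0],
          [0, 0, 0]]
print(dice(auto, manual))  # 2*3/(4+3) = 6/7 ≈ 0.857
```

A Dice of 1.0 means perfect overlap and 0.0 means none, which is why average Dice differences of 0.17 and 0.35 between methods are substantial.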

The developed technique requires only minimal user interaction. Moreover, it is more robust than commonly used state-of-the-art methods, providing more repeatable performance and lower dependence on the contour initialization. To our knowledge, this technique is the first fully adaptive framework, yielding a far more general segmentation solution than other currently available methods. A single method working across multiple anatomic sites and modalities to automate assessment of cancer can substantially help in tracking lesion changes over time, faster and more accurately than the current routine used in clinical protocols. This can help in evaluating treatment efficacy, enabling adaptation of the best treatment for a specific patient and improving cure rates and patient experience. The ultimate health benefit of this work is thus substantial, potentially improving assessment of treatment response in all cancer patients.
[Figure 1]

Challenges and Collaborative Projects: Platforms and resources

Keyvan Farahani, PhD
Cancer Imaging Program

National Cancer Institute
Members of the Quantitative Imaging Network share a common overarching goal: the development of imaging software tools and methods to measure or predict response to cancer therapy. The wide range of imaging technologies and cancers covered by QIN activities provides ample opportunities for the network teams to engage in multi-center projects where they share common interests in the evaluation of tools and methods for a given modality. In the QIN, these multi-center activities are conducted through Challenges and Collaborative Projects (CCPs). Whereas challenges allow teams to benchmark the performance of their tools against reference phantom and clinical datasets, collaborative projects facilitate analytical studies of tools, methods, and protocols. Overall, CCPs provide an effective way to drive algorithmic excellence and build scientific consensus in a collegial environment. The results of CCPs are often disseminated through peer-reviewed scientific publications.

Currently, many CCPs are facilitated through NCI-sponsored resources such as the Cancer Imaging Archive (TCIA) and other platforms developed with the support of NCI CBIIT (Center for Biomedical Informatics and Information Technology), as well as independent commercial products. As CCPs across the network mature, there is now a great opportunity to further disseminate CCP products, including open-source tools, radiomics feature ontologies, and other artifacts, through QIN public libraries and archives.



PET segmentation challenge
Reinhard Beichel
University of Iowa
Radiomics utilizes a large number of image-derived features to quantify tumor characteristics that can in turn be correlated with response and prognosis. Unfortunately, extraction and analysis of such image-based features are subject to measurement variability and bias. The challenge for radiomics is particularly acute in positron emission tomography (PET), where limited resolution, a high noise component related to the stochastic nature of the raw data, and the wide variety of reconstruction options confound quantitative feature metrics. Extracted feature quality is also affected by the tumor segmentation methods used to define the regions over which features are calculated, making it challenging to produce consistent quantitative radiomics results across institutions that use different segmentation algorithms in their PET image analysis. Understanding each element contributing to these inconsistencies in quantitative image feature and metric generation is paramount for the ultimate utilization of these methods in multi-institutional trials and clinical oncology decision making. To assess segmentation quality and consistency at the multi-institutional level, we performed a Quantitative Imaging Network (QIN) PET segmentation challenge, in which seven institutional members of the QIN participated. Participants were asked to segment a common set of phantom PET scans acquired over a range of imaging conditions, as well as a second set of head and neck cancer (HNC) PET scans. Segmentations were generated at each institution using its preferred approach. In addition, participants were asked to repeat the segmentations after a time interval between the initial and repeat segmentation. This procedure resulted in a total of 806 phantom insert and 641 lesion segmentations. Subsequently, the volume was computed from each segmentation and compared to the corresponding reference volume by means of statistical analysis.
The analysis results underline the importance of PET scanner reconstruction harmonization and imaging protocol standardization for the quantification of lesion volumes. In addition, to enable a distributed multi-site analysis of FDG PET images, harmonization of analysis approaches and operator training, in combination with highly automated segmentation methods, seems advisable. Future work will focus on quantifying the impact of segmentation variation on radiomics system performance.
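The volume comparison step described above, deriving a volume from each binary segmentation and comparing it to the corresponding reference, can be sketched as follows. The toy mask, voxel size, and reference volume are invented for illustration; a real phantom analysis would use the known insert volumes as ground truth.

```python
# Hedged sketch of segmentation-volume quantification: count foreground
# voxels in a binary 3D mask and scale by the per-voxel volume.

def segmentation_volume(mask_3d, voxel_mm3):
    """Volume in mm^3 of a binary 3D mask given the per-voxel volume."""
    return sum(v for plane in mask_3d for row in plane for v in row) * voxel_mm3

# 2x2x2 toy segmentation with 5 foreground voxels; 2 mm^3 voxels
mask = [[[1, 1], [1, 0]],
        [[1, 1], [0, 0]]]
measured = segmentation_volume(mask, voxel_mm3=2.0)   # 10.0 mm^3
reference = 12.0                                      # assumed reference volume
percent_error = 100.0 * (measured - reference) / reference
print(measured, round(percent_error, 1))  # 10.0 -16.7
```

Repeating this per participating site and per insert yields the distributions of volume error that the statistical analysis above compares across institutions.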


QIN Challenge: Breast MRI Metrics of Response (BMMR)
Nola Hylton, UCSF; David Newitt, UCSF; Jayashree Kalpathy-Cramer, MGH;
Despina Kontos, University of Pennsylvania; Maryellen Giger, University of Chicago;
Jennifer Drukteinis, Moffitt Cancer Center; Lawrence Hall, University of South Florida;
Zheng Zhang, Brown University; Helga Marques, Brown University; Keyvan Farahani, NCI

MRI is effective for monitoring primary breast cancer response to neoadjuvant chemotherapy (NAC) and can provide prognostic information. American College of Radiology Imaging Network (ACRIN) trial 6657 evaluated contrast-enhanced MRI for assessment of response in patients with stage 2 or 3 breast cancer receiving NAC and found that tumor response measured by MRI was predictive of both pathologic complete response (pCR) and recurrence-free survival (RFS). The goals of the Breast MRI Metrics of Response (BMMR) Challenge were 1) to identify imaging metrics derivable from contrast-enhanced breast MR images acquired in the ACRIN 6657 trial that show a statistically significant association with RFS, and 2) to demonstrate improvement in predictor performance over functional tumor volume (FTV), the primary imaging variable tested in ACRIN 6657.


The BMMR Challenge was developed by the QIN EC BMMR Working Group and was the first challenge to be performed under the new QIN guidelines for Challenges and Collaborative Projects (CCPs). Following a one-month training phase, the BMMR Challenge conducted a test phase from May through October 2016. MRI data from 162 ACRIN 6657/I-SPY 1 patients, annotated with RFS outcome and breast cancer subtype (defined by hormone receptor (HR) and HER2 receptor status), were made available on the TCIA website. In order to preserve the full ACRIN 6657 cohort for testing, a separate training data set was provided on the TCIA website, consisting of 64 patients with RFS from a UCSF pilot NAC study. The challenge was managed in collaboration with Dr. Jayashree Kalpathy-Cramer through the QINLabs website. Three QIN groups (University of Chicago (M. Giger), Moffitt Cancer Center (J. Drukteinis), and MGH (J. Kalpathy-Cramer)) and one non-QIN group (University of Pennsylvania (D. Kontos)) submitted results for evaluation. Statistical analysis of the challenge results was performed by the ACRIN Biostatistical Center (Zheng Zhang and Helga Marques of the Brown University Center for Statistical Sciences).

Sixty entries were submitted by the four participating institutions, including metrics (features and classifiers) derived from analyses of tumor morphology and contrast kinetics, as well as metrics generated using unsupervised machine learning approaches. Metrics were evaluated for association with RFS using the c-statistic and compared to the ACRIN 6657 results for the FTV predictor. Robustness of predictors was evaluated by ranking features/classifiers according to the percentage of available data for which the metrics were successfully measured. Additional characteristics considered included treatment time point, 3D versus 2D methods, semi- versus fully-automated methods, and the use of image registration.
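The c-statistic used to score each metric's association with RFS can be sketched as a concordance index over patient pairs. For simplicity, this toy version assumes every patient had an event (no censoring), which the real ACRIN survival analysis would have to handle; the risk scores and recurrence times below are invented.

```python
# Illustrative c-statistic (concordance index) sketch, assuming no
# censoring: the fraction of comparable patient pairs in which the
# higher-risk patient recurred earlier.

def c_statistic(risk_scores, survival_times):
    """Ties in risk count as half-concordant; tied times are skipped."""
    concordant = ties = comparable = 0
    n = len(risk_scores)
    for i in range(n):
        for j in range(i + 1, n):
            if survival_times[i] == survival_times[j]:
                continue  # pair not comparable under this toy rule
            comparable += 1
            # hi = patient with the shorter survival time
            hi, lo = (i, j) if survival_times[i] < survival_times[j] else (j, i)
            if risk_scores[hi] > risk_scores[lo]:
                concordant += 1
            elif risk_scores[hi] == risk_scores[lo]:
                ties += 1
    return (concordant + 0.5 * ties) / comparable

risk  = [0.9, 0.4, 0.7, 0.1]     # e.g., scores from an imaging metric
times = [12, 40, 25, 60]         # months to recurrence (toy values)
print(c_statistic(risk, times))  # 1.0: perfectly concordant toy data
```

A c-statistic of 0.5 corresponds to a non-informative metric, so challenge entries are compared by how far above 0.5 they reach relative to the FTV baseline.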

The BMMR Challenge results are currently being prepared for publication and will be presented at the 2017 QIN face-to-face meeting.

The Stanford Quantitative Image Feature Pipeline (QIFP)
Sandy Napel PhD, Daniel L. Rubin MD MS, Dev Gude BS, Sheryl John MS
Stanford University Department of Radiology
The QIFP, a project in its second year of funding, is intended to serve as a resource for researchers who are (1) developing imaging biomarkers that use radiomics features of tumors in medical images, (2) using imaging biomarkers to build predictive models for clinical variables (e.g., survival, response to therapy), and (3) using these predictive models to follow cohorts in clinical trials. In this talk I will cover, and illustrate with screen shots where possible, the goals, current status, and anticipated future developments of the QIFP.
Goals: the QIFP will provide:


  • a web-based interface for development and execution of configurable QIF processing pipelines,

  • a sharable library of QIF algorithms,

  • support for user-contributed QIFs via Docker containers,

  • tools for building and documenting cohorts for QIF processing via DICOM connectivity to images and other data stored in the Cancer Imaging Archive (TCIA), ePAD systems (another QIN project for image annotation/curation), local data stores, and PACS systems, and

  • machine learning algorithms for building predictive models for clinical variables from QIFs.


Current Status:

  • preliminary web-based interface available for testing, debugging, and initial studies; refined interface under development,

  • support for DICOM Segmentation Objects,

  • QIF algorithms including Stanford 3D features (intensity statistics, volume and surface area, shape, edge sharpness, Haralick textures) and Stanford SIFT features,

  • feature output as CSV, including scan and reconstruction parameters from DICOM headers,

  • a preliminary LASSO algorithm for predictive model building and testing,

  • collection of semantic features via AIM files,

  • graphical display of pipeline configuration, and

  • several pre-built workflows as well as user-configurable workflows.
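The preliminary LASSO step listed above is not described in detail here, but the idea behind L1-penalized model building from extracted features can be sketched with a tiny coordinate-descent solver. The data, penalty value, and implementation are all illustrative assumptions, not the QIFP's actual code; a production pipeline would more likely use an established solver.

```python
# Sketch of LASSO-style predictive model building: coordinate descent
# for (1/2n)||y - Xw||^2 + lam*||w||_1, showing how the L1 penalty
# drives coefficients of uninformative features exactly to zero.

def soft_threshold(x, lam):
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def lasso_fit(X, y, lam, n_iter=200):
    """Cyclic coordinate descent; X is a list of feature rows."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                      for k in range(p) if k != j)) for i in range(n)) / n
            col_ss = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / col_ss
    return w

# Toy data: y depends on feature 0 only; feature 1 is correlated noise.
X = [[1.0, 0.5], [-1.0, -0.3], [0.5, 0.9], [-0.5, -1.1]]
y = [2.0, -2.0, 1.0, -1.0]
w = lasso_fit(X, y, lam=0.3)
print([round(v, 4) for v in w])  # [1.52, 0.0]: noise feature eliminated
```

The same mechanism, applied to the CSV feature output described above, is what lets an L1-penalized model select a sparse subset of radiomics features for prediction.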



Under development:

  • 2D QIF algorithms,

  • other machine learning engines (e.g., SVM) and links to other servers running deep learning,

  • user upload and configuration of Docker modules for segmentation, feature computation, and machine learning, and

  • interoperability of Docker modules with other QIN pipeline efforts.

Additional information and demonstrations of the current status of the QIFP will be available during the poster sessions.


