Radiation Exposures in Medicine: Biological and Public Health Significance

1 January 2011

The American Statistical Association's 19th Conference on Radiation and Health, held in 2010, focused on the human health effects of exposure to ionizing and non-ionizing radiation. The purpose of these biennial conferences is to present the latest methodologies and research from diverse perspectives, with the goal of building a more global understanding of the topic. These perspectives include the laboratory, the clinic, and epidemiological studies of populations exposed to ionizing radiation through occupation, therapy or environmental contamination. The research scientists in attendance were statisticians, epidemiologists, physicians, risk assessors, biologists and physicists working in radiation research. Conference topics were selected by an Organizing Committee and reflected recent scientific advances or events evoking public concern. The meeting was devoted to the general theme of Radiation Exposures in Medicine: Biological and Public Health Significance, with five scientific sessions held in the mornings and evenings. Contributed posters were discussed during one afternoon, with the other afternoons devoted to informal meetings and outdoor recreational activities. Nine new investigators were honored with travel awards to present their work, and Dr. Stephanie Lamart was chosen by the judging committee as the outstanding poster presenter. The conference keynote speaker, Dr. Dale Preston, delivered the evening banquet address entitled “Radiation Risk Estimation: Thoughts and Digressions”. The five scientific sessions were (1) Exposure to medical personnel, patients, and the public: trends and issues, (2) Cancer and non-cancer late effects of therapeutic radiation, (3) Genetic susceptibility to radiation, (4) New technologies in radiation medicine, and (5) Low-dose radiation effects. Approximately 80 researchers attended the conference from the U.S., Japan, the United Kingdom, Germany, Canada, Taiwan, The Netherlands and Brazil. The next meeting will be held in June 2012. A summary of each session follows.

SESSION 1: EXPOSURE TO MEDICAL PERSONNEL, PATIENTS, AND THE PUBLIC: TRENDS AND ISSUES

Mary Schubauer-Berigan and Alice Sigurdson, Organizers; Mary Schubauer-Berigan, Discussant

This session was designed to address public and scientific concerns related to the increased use of medical radiation and exposure to workers from nuclear medicine and CT scan imaging. The aim was to understand dosimetric issues at the worker and the patient level, implications for worker populations, patient risks from imaging studies, and follow-up of regulatory recommendations by national bodies, such as the National Council on Radiation Protection and Measurements (NCRP). We sought to cover new information and issues from a micro- to a macro-perspective. In addition to the sixfold increase in medical radiation exposure to the general U.S. population since the 1980s, there are issues and challenges in quantifying the variability in exposure from the same imaging test, and there are difficulties in quantifying exposure to physicians or other medical personnel conducting interventional radiation procedures.

Challenges in occupational dosimetry for interventional fluoroscopy

Dr. Donald Miller (National Naval Medical Center) opened the session by describing the challenges in using badge readings for radiation health monitoring of interventional radiologists, interventional cardiologists and other practitioners. Personal characteristics, including body size and shape, as well as habits related to wearing badges are important determinants of dose variation. For example, the fit of the apron or thyroid collar can influence the areas of the body that are exposed, and wearing the badge on the hip or on the shoulder can change the badge dose by a factor of two to six. Variability exists among regulatory authorities concerning which calculation to use for monitoring, with some using effective dose equivalent and others using effective dose; there is no international consensus on the algorithms used to calculate the effective dose from dosimeters. The second challenge is compliance. Physicians may confuse their neck and waist badges, may wear someone else's badge, may forget to wear a badge or to turn it in, may consciously choose not to wear a badge, or may work at several hospitals so that a “total” badge dose is never reconciled across employers. Surveys of badge-wearing compliance have revealed that between 25% and 43% of physicians who perform interventional procedures wear their badges only sometimes, infrequently or never. Reasons for non-compliance include loss of income when dose limits are reached, added paperwork if small fractions of the dose limit are exceeded, intervention by the radiation safety officer that is viewed as punitive, and no apparent added value to monitoring. Issues related to dosimeter readings, use of shielding and compliance go beyond safety monitoring, since these same recorded badge readings will be used in epidemiological risk assessments of the radiation exposures received by interventional physicians.

Personnel dose estimation for interventional cardiology and other higher-dose procedures

Dr. Steven Simon (National Cancer Institute) continued the theme of the challenges of estimating doses to medical personnel from some higher-dose procedures such as interventional cardiology. He began with a thoughtful discourse on the application of effective dose and the circumstances in which it is and is not informative. For epidemiological studies and risk assessment, the absorbed dose to the organ of interest is most appropriate and preferred. Effective dose is inappropriate for this purpose because it already incorporates estimates of organ-specific risks and therefore should not be used to estimate risk directly. Nevertheless, effective dose is commonly used for simplicity, but caveats about the actual organ doses should be kept in mind. For example, if a physician wears an apron and the badge is beneath the apron, then organs under the apron are protected (and their doses are measured accurately), but the absorbed dose to the skin, brain and thyroid might be 20 times the reported effective dose. Summaries of effective doses for various cardiac and non-cardiac procedures ranged from 0.2 to 20 µSv per procedure. Typical annual effective doses for technologists in nuclear medicine were about 3 mSv per year (PET) or 2 mSv per year (non-PET). Discussion points included the appropriate focus on absorbed organ dose, rather than effective dose, as a basis for risk assessment, and the need for systematic evaluation of doses to medical personnel involved in high-dose procedures (e.g., interventional cardiologists). The importance of incorporating apron use in medical worker studies was underscored when such studies rely on badge dose monitoring data. In the discussion that followed, it was noted with regret that there is no centralized U.S. dose registry from which a working-lifetime dose could be assembled for epidemiological studies.
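
To make the distinction concrete, the sketch below computes an effective dose as the weighted sum of organ equivalent doses. The tissue weighting factors are the published ICRP Publication 103 values, but the organ doses are hypothetical numbers chosen to mimic a shielded-trunk, unshielded-head scenario; they are not measurements from the talk.

```python
# Illustrative only: effective dose as a weighted sum of organ equivalent doses.
# Tissue weighting factors are from ICRP Publication 103 (they sum to 1.0);
# the organ doses further below are hypothetical.

W_T = {
    "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "breast": 0.12, "remainder": 0.12, "gonads": 0.08,
    "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
}

def effective_dose(organ_equivalent_dose_mSv):
    """Effective dose E = sum_T w_T * H_T (mSv); organs not listed contribute 0."""
    return sum(w * organ_equivalent_dose_mSv.get(t, 0.0) for t, w in W_T.items())

# Hypothetical annual organ doses (mSv) for a physician wearing a lead apron:
# head and neck organs are unshielded, trunk organs are largely shielded.
organ_dose = {"thyroid": 6.0, "brain": 6.0, "skin": 6.0,
              "lung": 0.3, "stomach": 0.3, "colon": 0.3}
print(f"Effective dose: {effective_dose(organ_dose):.2f} mSv")
# The unshielded thyroid/brain/skin doses (6 mSv) are roughly an order of
# magnitude larger than the resulting effective dose (~0.5 mSv), which
# illustrates the point made in the talk.
```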

Exposures of medical workers and the dose registry of Canada

Dr. Daniel Krewski (University of Ottawa Faculty of Medicine) described studies of Canadian medical workers using the Canadian National Dose Registry (NDR) with linkage to mortality and incident cancer registries. The NDR database has been maintained since 1950 and covers essentially all the monitored radiation workers in Canada. Within the NDR are 67,562 medical workers who are physicians, nurses, nuclear medicine technicians, radiation technologists, physicists and others occupationally exposed to medical sources of radiation. Since 1951, annual doses to medical workers have declined from about 2.5 mSv/year to slightly under 0.5 mSv/year. Nuclear medicine technicians had the highest annual doses, but these have also declined with time from about 10 mSv/year to 2 mSv/year. The average cumulative lifetime dose in the cohort is 3.8 mSv. The Canadian medical worker studies show a very strong healthy worker effect for cancer and non-cancer causes of death and cancer incidence. Only thyroid cancer was in excess with a standardized incidence ratio of 1.7 (95% confidence interval: 1.4–2.1) in males and females combined. Using individual doses in internal analyses, no cancer site showed a significantly elevated excess relative risk. Despite the availability of dosimetry records, about 25% of the cohort could not be linked to registries due to lack of information on gender, date of birth and other identifying characteristics. Other limitations are the lack of risk factor information, badge placement practices and type of procedures performed (such as working with fluoroscopy). Nevertheless, it is clear that radiation exposure over time has decreased, and there does not appear to be an increased risk of incident cancer in the Canadian medical radiation workers. The thyroid cancer excess may be explained by increased surveillance in a cohort of persons with access to and knowledge about health care. With such low doses it may be difficult to have sufficient power to conduct quantitative dose–response analyses. Discussion focused on the utility and uniqueness of the Canadian NDR and the importance of analyzing separately the medical specialties involved in higher-dose procedures (e.g., interventional cardiologists and nuclear medicine practitioners), which Dr. Krewski indicated should be feasible in the NDR.
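
For readers unfamiliar with the metric, the sketch below shows how a standardized incidence ratio (SIR) and its exact Poisson confidence interval are typically computed in registry linkage studies of this kind. The observed and expected counts are hypothetical and merely chosen to give a ratio similar in magnitude to the thyroid cancer result quoted above; they are not the NDR data.

```python
# Illustrative SIR calculation with an exact Poisson confidence interval.
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    """SIR = observed/expected, with an exact Poisson CI for the observed count."""
    sir = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return sir, lower, upper

# Hypothetical example: 120 thyroid cancers observed where 70 were expected
# from general-population rates applied to the cohort's person-years.
print("SIR = %.2f (95%% CI %.2f-%.2f)" % sir_with_ci(120, 70))
```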

Medical imaging and radiation doses to the public: variation in the radiation associated with CT examinations and cumulative medical exposure over time

Dr. Rebecca Smith-Bindman (University of California at San Francisco) described the scope and widespread use of medical imaging in the U.S. On average, individuals undergo 1–5 medical imaging tests per year. Studies focusing on Medicare enrollees show that approximately 4500 imaging examinations are performed per 1000 enrollees per year. Threefold increases in the number of advanced medical imaging tests have occurred in the past 15 years. Dr. Smith-Bindman and her colleagues wanted to understand the doses from CT scans, which as a group are currently the largest cumulative source of medical radiation exposure to the U.S. population. They surveyed and abstracted technical parameters for CT scans at four large hospitals in the San Francisco Bay Area and found striking variation in doses to patients across the different hospitals but also with the same machines in the same hospitals for the same tests. There was a 13-fold variation between the highest and lowest dose for each study type. They also calculated the numbers of additional cancers that might be expected based on the present annual rate of imaging in the U.S. For example, 1 in 270 women aged 40 years who undergo CT coronary angiography is expected to develop cancer from that CT scan (for men, 1 in 600). CT technology has changed from the lower-dose single-slice scanners to multi-slice helical scanners that scan at faster speeds and expose larger amounts of tissue. Full-body scans are used for trauma or screening, and higher doses improve picture clarity. Future work by Dr. Smith-Bindman will include assessing the risk of medical imaging within a large HMO research network. In the meantime, the FDA has announced an initiative to reduce radiation from medical imaging, and the U.S. Congress has held hearings to understand the radiation risks from medical imaging. In some situations image clarity could be reduced, lowering the radiation exposure to the patient without sacrificing the clinical usefulness of the image. Radiation exposure could also be reduced by using non-radiation imaging techniques or by forgoing imaging in situations with no clear clinical benefit. The discussion following her presentation noted the use of effective dose for the calculations of cancer risk and that mammography is one of the largest contributors to the breast dose but is a small portion of the effective dose. Understanding and accounting for the variation in CT doses will be an ongoing challenge for epidemiological studies of patients who have undergone these procedures. Presently, international studies are under way to follow pediatric patients who have received CT scans.

Medical exposures of the U. S. population: new developments since NCRP Report No. 160

Dr. David Schauer [National Council on Radiation Protection and Measurements (NCRP)] provided information on the impact of NCRP Report No. 160, which reported a sixfold increase in medical radiation exposure to the public between the early 1980s and 2006. The individual annual effective dose from medical radiation was approximately 0.5 mSv in the early 1980s and rose to 3.0 mSv in 2006. The increase in medical radiation exposure in the U.S. could largely be attributed to a dramatic rise in CT and nuclear medicine procedures. In 2006, CT and nuclear medicine represented 22% of all medical imaging procedures, yet they contributed 75% of the collective effective dose to patients. Additional reasons for the increased use of medical imaging included financial gain to the ordering physician, reduced effort when conducting physical examinations, and avoidance of potential malpractice lawsuits. He cited evidence from a 2008 GAO report that found physicians who have imaging technology in their offices are 1.7 to 7.7 times more likely to order a scan than physicians in the same specialty who do not self-refer. Congress is questioning the extent of the self-referral problem and its effect on Medicare spending. Other efforts to curb the number of medically unjustified scans of all types include additional guidance by the medical community to increase the appropriateness of ordering outpatient imaging, reducing radiation exposure for “high-dose” studies such as myocardial perfusion imaging, and requiring doctors to submit their rationale for ordering a study to an algorithm that determines whether the test is really indicated. In early 2010 the Food and Drug Administration (FDA) launched an initiative to reduce unnecessary radiation exposure from medical imaging by developing appropriate use criteria, optimizing the safe use of medical imaging devices, and increasing patient awareness of the potential radiation risks. The FDA envisions a tool that will allow patients to track their own medical imaging history. The NCRP is committed to facilitating the use of combined resources to advance radiation protection, to cooperating with national, international and private organizations, and to disseminating the information contained in Report No. 160 as widely as possible. The discussion focused on the undisputed increase in the use of imaging procedures and the radiation doses they deliver. It is unclear how knowledgeable most radiologists really are about the current evidence on low-dose health risks.

Concluding comments

In addition to the discussion points noted above, Dr. Schubauer-Berigan described the need for education of radiologists and clinicians on the evolution of the epidemiological and biological evidence regarding cancer risk at lower doses (i.e., the dose at which significant cancer effects have been observed has steadily decreased over the past 50 years). Using ALARA (as low as reasonably achievable) principles has resulted in reductions in dose to nuclear weapons and nuclear power workers to levels well below the current U.S. occupational exposure limit of 50 mSv per year. The NCRP's 1990 Report 107 describes use of ALARA principles to reduce doses to medical workers; however, the use of nuclear medicine and interventional cardiology has grown since then. This fact and the challenges in adequately measuring occupational doses in these workers may indicate the need to revisit the success in applying ALARA within the medical professions. Meanwhile, epidemiological studies of workers in these medical specialties may provide additional evidence regarding low-dose risks of cancer and other diseases.

SESSION 2: CANCER AND NON-CANCER LATE EFFECTS OF THERAPEUTIC RADIATION

Sarah Darby and John Zimbrick, Organizers; Martin Colman, Discussant

As more cancer patients are successfully treated by radiation therapy, their increased longevity results in a growing number of cases in which late health effects, including second cancers and cardiovascular disease, occur. This session was organized to address current issues and concerns regarding some of the most prevalent late effects being observed. The aims were twofold: (1) to understand the current status of contemporary modeling studies being used to predict the risk of normal tissue complications and secondary cancer occurrence and (2) to focus on the results of some selected epidemiological studies of important late effects including those in childhood cancer survivors.

Predicting risk of second cancers following radiotherapy

Dr. Igor Shuryak (Columbia University) began the session with a description of the difficulties inherent in the application of risk estimates obtained from decades-old radiotherapy procedures to contemporary or future radiotherapy methodologies. One solution to this problem is to develop biologically based mathematical models that can be used to predict the risk of secondary cancers from any radiotherapy protocol by using data from older treatment protocols and the literature to obtain estimates of model parameters. When a selected model is deemed to be sufficiently well-calibrated to adequately predict second cancer risk estimates, it can be applied to any current or prospective radiotherapy protocol by use of dose–volume histograms (also discussed later in the session by Dr. Deasy). For clinical use, the model can be incorporated into the radiotherapy treatment planning algorithms, thus allowing the medical physicist planner to select the treatment option with the lowest risk estimate from among those deemed equally effective. One model that appears initially to hold promise is based on combining a “short-term” carcinogenesis model with a “long-term” incidence model. Here a short-term model is defined as one that attempts to predict risk, based on early carcinogenic processes in stem cells, such as initiation, inactivation and repopulation that occur during or shortly after irradiation. A long-term model examines promotion, clonal expansion and transformation parameters over time to predict the probability of a malignancy. These two models were combined and applied to three sets of data for nine cancer sites: breast, lung, stomach, thyroid, pancreas, bladder, brain, colon and rectum. The data sets consisted of (1) background U.S. cancer incidence, (2) radiogenic cancer incidence in Japanese atomic bomb survivors, and (3) radiotherapy-induced second cancer risks from a variety of second cancer studies (e.g., children treated for cancer, Hodgkin lymphoma patients). The model provides good fits for background age-dependent cancer incidence patterns including an exponential rise throughout most of the human life span and a peak/turnover at old age. It also predicts the radiogenic risks adequately. Initial results on second cancers indicate that the model provides reasonable predictions of excess relative risk for breast cancer as a function of radiation dose and dose–volume histograms.
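
As a minimal illustration (not Dr. Shuryak's model itself), the sketch below shows how a calibrated dose–response function could be combined with a dose–volume histogram to score competing treatment plans for predicted second cancer risk. The risk function, its parameters and the DVH bins are hypothetical placeholders.

```python
# Minimal sketch: score a plan's predicted second cancer risk from a DVH by
# weighting a (hypothetical) dose-response function by fractional organ volume.
import math

def excess_risk_per_unit_volume(dose_gy, alpha=0.1, beta=0.05):
    """Hypothetical linear-exponential response: linear induction at low dose,
    attenuated at high dose by cell killing."""
    return alpha * dose_gy * math.exp(-beta * dose_gy)

def organ_risk_from_dvh(dvh):
    """dvh: list of (dose_gy, volume_fraction) bins; fractions sum to 1."""
    return sum(v * excess_risk_per_unit_volume(d) for d, v in dvh)

# Two hypothetical plans delivering dose to the same organ at risk.
plan_a = [(2, 0.6), (10, 0.3), (30, 0.1)]   # most of the organ at low dose
plan_b = [(8, 0.9), (30, 0.1)]              # more volume at intermediate dose
print("plan A risk score:", round(organ_risk_from_dvh(plan_a), 3))
print("plan B risk score:", round(organ_risk_from_dvh(plan_b), 3))
# Such scores would let a planner rank otherwise equally effective plans by
# predicted second cancer risk.
```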

Statistical adventures in normal tissue complication probability modeling

Dr. Joe Deasy (Washington University, St. Louis) focused on the development of normal tissue complication probability (NTCP) models that can be used to help define the tolerance and risk for morbidity associated with various radiotherapy treatment plans. Once developed and validated, such models can be used to help choose a treatment protocol that is acceptable for therapy and also minimizes risk of complications. They can also be used as a tool to guide improvements in treatment plans. Organ radiation tolerance is often a strong function of both the volume of irradiated tissue and the radiation dose delivered to that volume. However, the functional complication end points (such as hypothyroidism) appear to correlate with the mean dose delivered to the tissue volume, whereas the tissue damage end points (e.g., rectal bleeding) appear to relate more directly to the higher local doses given to those specific tissues. Thus the specific organ (e.g., liver) being irradiated may not be uniformly radiosensitive for a given end point over its entire volume, and this presents an obstacle to model development. More information on this possible non-uniformity of response for each organ is needed. Further, the models require image data such as CT scans and delivered dose distributions as well as appropriate software to acquire and manipulate this information. Such software has only recently become available and is expected to facilitate future studies. There are other issues that need to be resolved to improve the statistical analysis of the dose distributions and other parameters needed for model development. These include differences in treatment techniques and patient cohorts as well as inadequate numbers of patients who qualify for the analyses. Given these various issues and challenges, an elementary model has been developed for use in treatment planning to minimize the risk of two end points: late rectal bleeding and radiation pneumonitis.
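
Dr. Deasy's specific models were not detailed in this summary; as a concrete illustration of the general approach, the sketch below implements the widely used Lyman–Kutcher–Burman (LKB) formulation, in which a dose–volume histogram is reduced to a generalized equivalent uniform dose (gEUD) and mapped to a complication probability. The parameter values and the rectal DVH shown are illustrative assumptions, not fitted clinical values.

```python
# Sketch of the Lyman-Kutcher-Burman (LKB) NTCP formulation.
from math import erf, sqrt

def geud(dvh, a):
    """Generalized equivalent uniform dose; a = 1 reduces to the mean dose
    (mean-dose-driven end points), while large a emphasizes hot spots
    (local-damage end points such as rectal bleeding)."""
    return sum(v * d ** a for d, v in dvh) ** (1.0 / a)

def lkb_ntcp(dvh, td50, m, a):
    """NTCP = Phi((gEUD - TD50) / (m * TD50)), Phi the standard normal CDF."""
    t = (geud(dvh, a) - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Hypothetical rectal DVH: (dose in Gy, fractional volume), fractions sum to 1.
dvh = [(20, 0.4), (45, 0.4), (70, 0.2)]
print("NTCP(rectal bleeding) ~", round(lkb_ntcp(dvh, td50=76.9, m=0.13, a=8), 3))
```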

Radiation-associated lung cancer after radiotherapy

Dr. Candace Correa (University of Michigan) described the findings of several major epidemiological studies on the risk of late-onset second lung cancers occurring after radiotherapy for breast cancer. These lung cancers occur because of the incidental irradiation of the underlying lungs when the breast or chest wall is given radiotherapy. The risk is greater for the lung on the same side of the body as the irradiated breast (ipsilateral) than for the lung on the opposite side (contralateral). Among women diagnosed with breast cancer in SEER cancer registries, elevated lung cancer mortality ratios of 2.0 (1.0–4.0) at 10–14 years after radiotherapy and 2.7 (1.7–4.5) at 15 years after radiotherapy were observed for ipsilateral compared with contralateral second lung cancers. Breast cancer survivors who smoked had a much larger risk of developing second lung cancer than non-smokers, particularly among patients who received radiotherapy. Despite knowledge of these risks, there is insufficient information to reliably reduce the radiotherapy-induced incidence of second lung cancers. While radiotherapists often limit the amount of lung irradiation by dose–volume metrics to protect against radiation pneumonitis, they have no reliable dose–volume limits for minimizing second lung cancers. Present and future radiotherapy methodologies that use CT treatment planning for breast cancer irradiation involve smaller dose–volume distributions in the lungs than were previously used, and these dose–volume distributions can be accurately delineated and quantified. To study correlations between these dose–volume distributions and the incidence of second cancers, Dr. Correa and colleagues have undertaken a study of approximately 5100 stage I–II breast cancer patients who received treatment between 1972 and 2008 and will identify histologically verified cases of second lung cancer diagnosed ≥2 years after radiotherapy. Medical records are being reviewed for smoking history, second lung cancer stage, date and cause of death, pathology, medical oncology, radiotherapy and radiology information. The type of radiotherapy used (IMRT or conventional, etc.) and the spatial dosimetry will be examined. The goals of the study are to (1) describe the incidence and latency of second lung cancer with modern treatment regimens, (2) determine the effect of smoking intensity plus radiotherapy on the risk of second lung cancers, (3) examine the relationship between the second lung cancer and the areas of the lung that were irradiated (including ipsilateral compared with contralateral lung cancers and the anatomic location of the ipsilateral cancers relative to the radiation fields), and (4) investigate the potential association between radiotherapy treatment parameters and the amount of lung tissue irradiated (dose–volume). The results of the study should allow the development of new radiotherapy regimens that are effective but also minimize the risk of second cancers, and should help patients adopt lifestyles (e.g., through smoking cessation programs) that reduce their individual risk of developing second lung cancers.

Dose–response relationships for radiation-induced heart disease

Dr. Sarah Darby (Oxford University, UK) summarized the epidemiological evidence for radiation-induced heart disease after radiotherapy for breast cancer in patients who received cardiac doses ≤20 Gy. Additional evidence is found in studies of the atomic bomb survivors, which have shown that whole-body radiation doses up to 4 Gy resulted in latent adverse cardiovascular effects. Dr. Darby described several studies on long-term survival of breast cancer patients based on clinical trials in which all women received similar surgical and drug treatment and half were then allocated at random to receive radiotherapy. An early meta-analysis of these trials was published in 1987. Survival beyond 10 years was significantly worse for those receiving radiotherapy. Although this early study was unable to identify the disease responsible for the increased mortality, a later meta-analysis by the Early Breast Cancer Trialists' Collaborative Group (EBCTCG) showed that mortality from heart disease was increased by 27% (P = 0.0001) in women randomized to surgery plus radiotherapy compared with women randomized to surgery alone. Most of the increase in mortality was due to progressive coronary artery disease. Several aspects of the radiotherapy treatment contributed to the mortality, including field placement near the heart, orthovoltage radiation that delivered high doses to the anterior part of the heart, large daily fractions and high total doses. A recent preliminary analysis of updated EBCTCG data from over 30,000 breast cancer patients followed for up to 20 years showed a statistically significant dose response that yielded a risk of death from heart disease of 3%/Gy (95% CI, 2%–5%; P = 0.0001). The risk estimate is considered approximate because individual treatment plans were not available for the patients in these trials. Nevertheless, the analysis provides good evidence that the risk of late-onset radiotherapy-induced heart disease is directly related to cardiac dose. More robust evidence of this relationship comes from studies in which women receiving radiotherapy for left-sided breast tumors are compared with women given radiotherapy for right-sided tumors. Radiotherapy of left-sided breast tumors usually results in larger cardiac doses than radiotherapy of right-sided tumors. A recent study that examined the incidence of coronary artery disease after breast radiotherapy revealed a higher prevalence of stress test abnormalities in left-sided than in right-sided breast cancer patients given radiotherapy (59% compared with 8%; P = 0.001). Among left-sided breast tumor patients, the disease distribution differed from that expected in control women, with a preponderance of left anterior descending artery disease. The anterior portion of the heart and the left anterior descending artery region are the portions of the heart most often within the tangential radiation fields used to treat breast cancer. This finding provides direct evidence of a causal effect of radiotherapy on the development of coronary artery disease.

Radiation-related second cancers and cardiovascular outcomes in the Childhood Cancer Survivor Study

Dr. Peter Inskip (National Cancer Institute) described the major results from a continuing study of second cancers and other late effects in childhood cancer survivors. The Childhood Cancer Survivor Study (CCSS) includes more than 14,350 5-year survivors diagnosed during the period 1970–1986 for whom there is detailed information on cancer treatment, a comparison group of nearly 4,000 siblings, and long-term follow-up. Studies of second cancers have demonstrated a central role of radiation therapy in the occurrence of new solid cancers. Detailed studies of second cancers of the central nervous system (CNS), breast and thyroid gland have shown a variety of dose–response relationships and various patterns of risk modification by host factors. For example, dose–response relationships for CNS and breast cancers are consistent with a linear function for doses up to 40+ Gy, whereas the dose response for thyroid cancer increases with dose up to 15–20 Gy and then decreases, most likely due to cell killing. Mantle radiotherapy for Hodgkin lymphoma is particularly strongly associated with risk for second breast and thyroid cancers. This treatment also places the heart and lung at risk for non-cancer complications. A summary of the quantitative late effects found thus far includes 470 second cancers, with a relative risk (RR) of 2.9 comparing irradiated and nonirradiated childhood cancer survivors, 142 cardiac events, with an RR of 3.3, and 67 pulmonary events, with an RR of 1.4. These data indicate that cardiovascular events are the leading non-cancer cause of death among survivors of childhood cancers, with a sevenfold increase in risk relative to the general U.S. population. Increased risks include congestive heart failure, myocardial infarction, pericardial disease and valvular abnormalities, with onset beginning at a young age and risk continuing to increase up to 30 years after diagnosis. Each disease exhibits a positive association with increasing dose for doses ≤15 Gy. The study also shows that radiation may indirectly influence the risk of second cancers and cardiovascular disease. For example, high-dose ovarian irradiation appears to lower the risk of radiation-related breast cancer, most likely by suppressing the stimulatory effect of ovarian hormones. The CCSS cohort is still relatively young and just entering the age range in which the incidence of cancer and cardiac disease tends to increase in the general population. It will be important to determine whether the high relative risks seen at younger ages extend into adulthood, in which case the absolute risks may be considerable.

Concluding comments

In the concluding discussion, Dr. Martin Colman commented that the studies on lung cancer risk after radiotherapy for breast cancer may provide data to address the impact of smoking cessation prior to radiotherapy and the potential value of medical counseling to reinforce smoking cessation behavior. Medicare currently approves and provides extra compensation for physicians who provide smoking cessation counseling, but it would be useful to establish whether the reduction or cessation of smoking leads to improved treatment outcomes. With regard to latent cardiovascular effects, Dr. Colman raised the question of the impact of modern radiotherapy methodologies involving IMRT, which focus the radiation dose more precisely in the target volume but at the same time increase the low-dose “bath” that envelops the normal tissues, including the heart. He noted that important differences in mortality from radiotherapy after mastectomy compared to breast-conserving surgery were presented during the session and could result from variations in treatment techniques. The impact of modern techniques of three-dimensional conformal radiotherapy and IMRT, introduced over the past 15 years, will become apparent over the next 10–15 years. Tangential breast radiation fields have been changed over time, and earlier radiotherapy techniques probably included more heart volume. Dr. Colman also commented on the results of the CCSS showing that the dose–effect response for second thyroid cancers increased with doses up to 15–20 Gy and then decreased. These cancer survivors had been treated for Hodgkin lymphoma as well as for benign conditions in the head and neck region such as thymus enlargement and tonsillitis. The dose–effect curves are very similar in shape to those obtained many years ago by Dr. Arthur Upton for radiation-induced leukemia, implying that at higher radiation doses cell killing reduces the surviving population of cells at risk.

SESSION 3: GENETIC SUSCEPTIBILITY TO RADIATION EFFECTS

Daniel Stram and Parveen Bhatti, Organizers; Harry Cullings, Discussant

This session addressed the current state of knowledge regarding individual variability in susceptibility to radiation effects and statistical and epidemiological approaches for learning more about genetic determinants of radiation susceptibility. The session focused on (1) the recent growth of knowledge about genetic causes of diseases (and specifically cancer) coming from large-scale genetic association studies, (2) the possible implications of these new findings on susceptibility, and (3) current attempts to look directly at how variation over the entire genome interacts with radiation exposure to influence cancer risk.

The life and times of genome-wide scans

Dr. Christopher Haiman (University of Southern California) updated the participants on recent findings from genome-wide association studies (GWAS) of common variation and cancer risk. He gave a basic outline of the design and (very large) sample size requirements of GWAS and described some recent studies involving as many as 19,000 people diagnosed with specific cancer types and as many unaffected controls. He contrasted these massive studies with the far smaller linkage and candidate gene studies of the same diseases that predated the GWAS era but tended to give inconsistent and poorly reproducible results. Dr. Haiman noted that GWAS of cancer have produced over 100 solid, reproducible findings of single nucleotide polymorphisms (SNPs) with alleles that are consistently more common in cases than controls and are therefore associated with increased risk of various cancers. He highlighted prostate cancer as an example where GWAS have been notably successful in finding risk alleles (over 30 findings to date) that explain an important fraction (∼20%) of the apparent heritability (e.g., familial aggregation) of the disease. He described GWAS findings as providing new and often unexpected biological insight into the genes and pathways that are unambiguously important in disease etiology. He highlighted two such findings: one involving the discovery of a non-protein-coding region (on chromosome 8q24) containing multiple alleles that are reproducibly associated with the risk of many common cancers, and a second (a single SNP in the HNF1B gene) where the same allele is associated with both an increased risk of prostate cancer and a decreased risk of type 2 diabetes. He also described a number of other risk alleles found for breast and prostate cancer that are either (1) in genes that seem (at least in hindsight) to have a strong known biological rationale for being associated with cancer risk or (2) quite unexpected, with unknown annotation regarding function and link to cancer. Dr. Haiman concluded by showing that the 8q24 region contains prostate cancer risk alleles that were only discovered by examining non-European ancestry racial/ethnic groups directly; i.e., the region contains risk alleles that are not present at high frequency in Europeans yet contribute significantly to prostate cancer risk in several other groups. He argued that expanding the GWAS approach beyond people of European ancestry, by far the most commonly studied group to date, will be important in future discovery efforts.

Statistical methods in genome-wide association studies related to the search for gene and radiation interactions

Dr. Daniel Stram (University of Southern California) agreed with Dr. Haiman that genome-wide association studies (GWAS) of cancer have led to a number of highly reproducible findings regarding the influence of common alleles on risk for many cancers. He stated that broadening GWAS to encompass alleles involved in gene–environment interactions is one of the most important tasks in understanding the full range of genetic influence on cancer risk and in adding to our basic knowledge of cancer etiology. He also argued that for radiation epidemiology the discovery of alleles that modify individual sensitivity to the carcinogenic effects of radiation exposure has potential consequences for radiation protection policy. A number of methodological issues have relevance to the significance of GWAS findings for radiation epidemiology, including (1) the scale upon which “interactions” between radiation and genetics are to be defined and, more generally, what the synergy is between radiation exposure and genetic variation; (2) the relative impact of rare and common risk alleles on cancer risk generally and on individual susceptibility to radiation specifically; (3) the suggestion (at least for certain cancers) that many hundreds of common risk alleles may ultimately be found to be involved in disease risk, with many more genes having small rather than large effects on risk; (4) whether the influence of many risk alleles with small effects can be combined when addressing susceptibility to radiation; (5) whether gene–gene interactions need to be taken into account when evaluating gene–radiation interactions; and (6) whether results from studies of genetic susceptibility to high doses (e.g., therapeutic) of radiation can be extrapolated to risk from low doses. Regarding point (3), Dr. Stram noted that only a few studies (with the WECARE Study, also described in this session, being a leading example) have the sample size, exposure levels and resulting statistical power to investigate gene–radiation interactions directly on a genome-wide scale, that is, to discover variants that affect susceptibility to cancer (or other diseases) only in the presence of radiation exposure. However, he argued that many other studies may have the ability to assess other aspects of interactions between radiation and genes. Dr. Stram noted that to date there has been a remarkable similarity in the effects on cancer risk conferred by the common alleles found by GWAS; i.e., they seem to increase risk multiplicatively, with the risk increasing by 10–20% per copy of each risk allele carried by an individual. In addition, very simple aggregate risk scores (simple counts of the number of risk alleles across all risk-associated loci) seem to model the joint effects of the risk variants well. He suggested that constructing risk scores and testing whether they appear to interact with radiation exposures will be very important and that many studies will have the power to perform this sort of analysis even if they are too small to probe the whole genome directly for gene–radiation interactions. He noted that the scale (e.g., multiplicative, additive) on which such risk scores synergize with radiation is a question that bears close scrutiny. For example, the lack of any interaction (on a multiplicative scale) between risk score and radiation could itself imply that most excess cases of disease caused by radiation occur among the genetically susceptible, and this may ultimately prove to be important for radiation safety. Dr. Stram concluded his presentation with a discussion of various study designs and analysis methods, including counter-matching and case-only analysis, for the detection of multiplicative interactions between radiation and risk alleles or risk scores.
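
A minimal sketch of the aggregate risk-score idea follows; the genotypes and the uniform per-allele relative risk of 1.15 are illustrative assumptions, not estimates from any particular study.

```python
# Sketch of an aggregate (polygenic) risk score under a multiplicative model:
# count risk alleles across associated loci and apply an assumed per-copy
# relative risk. All numbers are hypothetical.

PER_ALLELE_RR = 1.15  # assumed ~15% risk increase per risk-allele copy

def risk_score(genotypes):
    """genotypes: risk-allele counts (0, 1 or 2) at each associated locus."""
    return sum(genotypes)

def relative_risk(genotypes, per_allele_rr=PER_ALLELE_RR):
    """Multiplicative model: RR = per_allele_rr ** (total risk-allele count)."""
    return per_allele_rr ** risk_score(genotypes)

carrier = [2, 1, 1, 0, 2, 1, 1, 2]   # hypothetical person carrying 10 risk alleles
print("risk alleles:", risk_score(carrier))
print("relative risk vs. zero-allele baseline: %.2f" % relative_risk(carrier))
# A gene-radiation interaction analysis would then ask whether the radiation
# dose-response differs across strata of this score (e.g., in a case-only or
# counter-matched design).
```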

Joint roles of radiation and genetic susceptibility in the etiology of second primary breast cancer: a report from the WECARE Study

Drs. Jonine Bernstein (Memorial Sloan-Kettering Cancer Center) and Duncan Thomas (University of Southern California) collaborated on a report from the WECARE Study. WECARE is a study of the occurrence of second primary contralateral breast cancers (CBC) in women treated by chemotherapy and/or radiation for a first breast cancer. The WECARE Study includes both a study of candidate genes, including genes in the DNA double-strand break repair pathway, and, more recently, a genome-wide investigation of CBC risk. The WECARE Study is specifically designed to use counter-matching to enhance power to investigate interactions between genes and radiation exposure in influencing (second) breast cancer susceptibility. Dr. Bernstein summarized what is currently known about risk of a second breast cancer in women treated for an initial primary cancer and outlined the design and current status of the WECARE Study. She described recently published results that found an interaction between radiation exposure to the contralateral breast at time of treatment of the first cancer and a subset of variations in the ATM gene. She also described the two-stage design of the ongoing WECARE GWAS. In stage 1, 1 million SNPs were genotyped in 708 women with CBC and 1,399 women with an initial breast cancer without CBC. In stage 2, SNPs that appear to be related to either CBC risk marginally or via an interaction with radiation will be followed up in another 1000 women with CBC and 1000 breast cancer cases without CBC. Dr. Bernstein indicated that the genotyping for stage 1 provided some suggestive findings both for the main effects of several SNPs in relation to risk of a CBC and for interactions with radiation. She also gave results for eight previously discovered GWAS SNPs from studies of initial breast cancers, finding that seven of them were significantly associated with risk of CBC. No interactions between radiation and these SNPs were detected in the initial analyses. Dr. Thomas presented results of a multivariate logistic regression analysis for the relationship between CBC occurrence and SNPs found in genes in the DNA double-strand break repair pathway. He showed suggestive effects of approximately 40 SNPs found in this pathway when using a gene-based trend analysis. Dr. Thomas described a number of further statistical approaches designed to incorporate prior information about interactions between genes that operate in known pathways into genetic analysis, with these methods being used in an ongoing and future planned analysis of the WECARE Study.

SNPs from GWA studies, what are their implications for understanding individual differences in susceptibility to late effects of radiation exposure?

Dr. Parveen Bhatti (Fred Hutchinson Cancer Research Center) discussed the reasons for studying gene–radiation interactions, including the discovery of new biological mechanisms and the identification of susceptible subpopulations, the latter of which holds the promise of personalized medicine. He went on to discuss how GWAS are, for the first time, providing highly reproducible genetic associations with disease, allowing researchers to explore gene–radiation interactions. He described a search for effect modification of breast cancer-related SNPs within the U.S. Radiologic Technologists (USRT) cohort study, which contributed cases and controls to the Cancer Genetic Markers of Susceptibility (CGEMS) breast cancer GWAS. The analysis suggested gene-by-radiation interactions for two markers: SNPs in the MRPS30 gene (of unknown specific function but related to apoptosis) and in the RAD51L1 gene (involved in DNA double-strand break repair). Even the USRT study, however, with 859 cases and 1083 controls, is underpowered to examine gene–radiation interactions; very large-scale pooled analyses would be required for definitive interaction analyses. Dr. Bhatti suggested that the search for gene–environment interactions may have to begin with much more common cancer-causing exposures than radiation, because there are very few study populations with biospecimens in addition to well-characterized radiation exposures. He noted that a GWAS of lung cancer has identified SNPs in a region of chromosome 15 that contains nicotinic acetylcholine receptor genes (among others), and these SNPs may be the first examples of gene–environment interactions in cancer discovered by GWA studies. It is unclear, however, whether these SNPs are related to lung cancer susceptibility directly or indirectly via smoking behavior.

Concluding comments

In addition to summarizing the basic points above, Dr. Cullings [Radiation Effects Research Foundation (RERF)] briefly reviewed the scope of work that has been done at RERF, where the effort is devoted primarily to elucidating radiation effects, and a range of more fundamental work has been completed in addition to candidate SNP-type genetic association studies of site-specific cancers. For example, considerable work has been accomplished on measuring stable chromosomal aberrations in cultured lymphocytes at long times after the donor's radiation exposure from the atomic bombs, and these assays show a considerably larger variation than would be expected from the sampling statistics associated with the limited number of cells being scored. A major contribution of chromosomal aberrations as a biodosimetry tool would be to relate this extra variation to errors in the survivors' radiation dose estimates, but this has been complicated by a concern about the extent to which it might reflect individual susceptibility to induction of chromosomal aberrations. Some other assays such as the micronucleus assay are even more directly relevant to individual susceptibility because they have the aspect of a controlled experiment in which a known radiation dose is administered, but they are performed in extracted cells and not in the intact person. Both of these assays have the limitation that they only measure persistent damage to DNA, which is not equivalent to measuring the real end point of interest, which is induction of cancer. RERF has a collection of biospecimens and excellent follow-up of cancer incidence; therefore, an important consideration in RERF's future planning is the extent to which array platforms should be used to attempt genome-wide association studies of cancer.

SESSION 4: NEW TECHNOLOGIES IN RADIATION MEDICINE

Martin Colman and Amy Kronenberg, Organizers; Martin Colman and Amy Kronenberg, Discussants

This session was intended to provide an introduction to new technologies in use or about to be in use in radiation medicine and included three presentations on advanced technologies for external-beam delivery, brachytherapy and charged-particle therapy. Integral to the consideration of the implementation of current and new technologies is the biological impact of these applications. Two presentations were included to address the potential health impact of increasing use of ionizing radiation in diagnostic applications and possible health effects related to charged-particle exposures during space flight.

Radiotherapy technology: form follows function

Dr. T. Rockwell “Rock” Mackie (University of Wisconsin and Tomotherapy, Inc.) described the evolution of radiation therapy and three-dimensional treatment planning and the development of technologies to improve dose distributions and reduce radiation damage to normal tissues. He described adaptive radiation therapy, which allows treatment to account for changes in patient motion and tumor response. He then described his invention, the Tomotherapy device, which adapted and merged the technologies of radiologic imaging and radiation therapy by combining a compact linear accelerator with the slip-ring gantry of a CAT (computerized axial tomographic) scanner. The result is a treatment device that can simultaneously image the patient, deliver treatment, and adapt the treatment plan to changes in the images over time.

Electronic brachytherapy: the technology and implications for patient treatment

Dr. Tom Rusch (Xoft Corporation) described the evolution of brachytherapy over time. Brachytherapy traditionally involves the use of sealed sources of radioactive material applied in contact with tissues or implanted into tissues or body cavities to deliver radiation doses to malignant tumors. Potential problems in brachytherapy center on the safe handling of the radioactive sources. He described the developing area of electronic brachytherapy, in which miniaturized X-ray-generating devices are used in much the same way as radioactive sources but can be activated or deactivated simply by switching the electric power supply on or off, increasing the safety of brachytherapy techniques for patients, family members and medical staff.

The advantages of proton therapy over conventional radiation therapy for pediatric CNS malignancies

Dr. Robert “Rusty” Marcus (University of Florida and Florida Proton Therapy Center) described the development of proton-beam radiation therapy, the potential benefits of the improved dose distributions offered by proton beams compared with the widely used photon beams, and the specific advantages of proton-beam radiation therapy in the treatment of children. He noted the expense of proton-beam facilities, the fact that in many clinical situations conventional radiation therapy is probably equivalent to proton-beam radiation therapy, and the consequent need to limit the use of such a scarce resource to patients who are likely to benefit. He presented data indicating that there is no dose threshold for impairment of cognitive function, so any reduction in dose to normal brain is likely to benefit children being treated for brain tumors; he concluded that children in need of radiation therapy may therefore represent a group of patients particularly likely to benefit from proton-beam therapy.

Projected cancer risks from current levels of diagnostic medical imaging in the U.S

Dr. Amy Berrington de Gonzalez [National Cancer Institute (NCI)] presented an update of the ongoing work at the NCI on the risks to the general population from the increasing use of diagnostic X rays and nuclear medicine scans. Detailed estimates of the frequency of diagnostic medical radiation exposures were taken from NCRP Report No. 160, Ionizing Radiation Exposure of the Population of the United States (2009). These data, together with organ-specific dose estimates and organ-specific lifetime risk models, were used to project the future cancers that may result from these diagnostic medical procedures. Dr. Berrington de Gonzalez focused on the projected risks from cardiac scans, which account for 85% of the collective dose from nuclear medicine scans, and considered both single-isotope and multiple-isotope procedures. She also illustrated the future cancer risks from computed tomography for different segments of the U.S. population. The new NCI risk calculator was introduced; it uses Monte Carlo simulation to estimate lifetime risks, taking into account multiple organ exposures to produce total risk estimates with uncertainty intervals for different segments of the population.
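
The sketch below illustrates the general Monte Carlo approach to propagating uncertain risk coefficients across multiple organ doses; it is not the NCI calculator itself, and the organ doses, risk coefficients and uncertainty parameters are hypothetical.

```python
# Generic Monte Carlo sketch of lifetime-risk projection with uncertainty.
import math
import random

# Hypothetical organ doses for one exam (mGy) and lifetime attributable risk
# coefficients (excess cancers per 100,000 persons per mGy), each with a
# lognormal uncertainty described by a geometric standard deviation (GSD).
exam = {
    "lung":    {"dose_mGy": 15.0, "coeff": 1.0, "gsd": 1.8},
    "breast":  {"dose_mGy": 20.0, "coeff": 0.8, "gsd": 2.0},
    "stomach": {"dose_mGy": 5.0,  "coeff": 0.5, "gsd": 2.2},
}

def simulate_total_risk(exam, n_trials=20_000, seed=1):
    """Sum uncertain organ-specific risks and return the median and a 90%
    uncertainty interval (excess cancers per 100,000 exposed persons)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for organ in exam.values():
            sampled_coeff = rng.lognormvariate(math.log(organ["coeff"]),
                                               math.log(organ["gsd"]))
            total += sampled_coeff * organ["dose_mGy"]
        totals.append(total)
    totals.sort()
    return (totals[n_trials // 2],
            totals[int(0.05 * n_trials)],
            totals[int(0.95 * n_trials)])

median, lo, hi = simulate_total_risk(exam)
print(f"~{median:.0f} excess cancers per 100,000 (90% UI {lo:.0f}-{hi:.0f})")
```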

56Fe radiation increases adhesiveness of aortic endothelium and accelerates atherosclerosis

Dr. Dennis Kucik (University of Alabama at Birmingham) introduced the concerns for astronaut health associated with exposure to charged-particle radiation, in particular that from galactic cosmic radiation. Energetic iron ions are thought to contribute substantially to the effective dose equivalent. Although it is well known that radiation exposure from other sources is associated with an increased risk of atherosclerotic disease, little information is available regarding the risks associated with heavy-ion exposures. Dr. Kucik reviewed his ongoing work with ApoE−/− mice, which develop atherosclerosis in a manner similar to humans as they age, without special diets or other interventions. Limited exposure of the upper aorta and the carotid arteries to energetic iron ions was associated with an increase in atherosclerotic disease. To test the hypothesis that inappropriate adhesion of immune cells might drive this increase in atherosclerosis after exposure to iron ions, an assay of endothelial adhesiveness was designed and implemented using a flow chamber in which leukocytes move across irradiated human aortic endothelium in culture. The results demonstrated that both iron ions and X rays increase the adhesiveness of aortic vascular endothelium on a time course compatible with the working model. Current studies are aimed at identifying the mechanisms underlying this increased adhesiveness, focusing on intercellular signaling between the irradiated endothelium and the circulating leukocytes. The goal is to understand the underlying mechanisms driving increased leukocyte adhesion and to design novel strategies to intervene in this process, both for astronauts exposed to space radiation and for earthbound individuals who may be exposed to more conventional sources of ionizing radiation in diagnostic or therapeutic medical procedures or in accident scenarios.

Concluding comments

The modern practice of radiation therapy for cancer has become increasingly complex over the past 30 to 40 years. Survival rates for most cancers have improved dramatically as a result of improvements in surgery, chemotherapy and radiation therapy and, most importantly, through multidisciplinary cooperation among sub-specialty oncology groups to optimize patient care. Radiation oncologists have been very conscious of the potential injurious effects of radiation therapy, particularly the role of cancer as a late effect, and are constantly searching for ways to reduce risk. A careful balance of risks and benefits guides nearly everything radiation oncologists do on a daily basis, and the dramatically reduced use of ionizing radiation in the treatment of benign conditions reflects that process. It is no longer considered reasonable or “standard of care” to treat arthritic, inflammatory or allergic conditions, or non-life-threatening benign tumors, with radiation therapy, particularly in young people. In parallel, changes in techniques over time, motivated by the desire to reduce the risk of second cancers, can be demonstrated in many instances. In summary, the new technologies discussed here are directed toward reducing the hazards of radiation dose delivery by confining and more accurately defining the target volumes for irradiation, improving the desired therapeutic impact, and optimizing protection of surrounding normal tissues. Apart from therapy, individuals are exposed to ionizing radiation in diagnostic procedures and in the workplace, including in space-flight scenarios. The goal remains to understand and reduce the risks associated with those exposures when they do occur.

SESSION 5: LOW DOSE RADIATION EFFECTS

Sally Amundson and Peter Jacob, Organizers; Jerome Puskin, Discussant

The goal of this session was to provide an update of a broad range of low-dose ionizing radiation studies spanning molecular signaling to epidemiology. Low-dose and low-dose-rate exposures are the most relevant for the majority of exposed populations, from radiation workers to patients undergoing diagnostic procedures. However, it is often difficult to study the effects of these exposures directly, and historically both our understanding of the molecular effects and the setting of risk estimates have come largely from extrapolation from high-dose acute exposures. Over the past decade, it has become increasingly obvious that direct extrapolation is not appropriate, because different mechanisms appear to be involved in the low-dose range. These may include signaling between directly irradiated and nonirradiated cells, adaptive protection against subsequent radiation damage, and differential reprogramming of cells and tissues through alterations in gene expression. New modalities enabling broad molecular characterizations are now being used for the study of low-dose effects, expanding our understanding of what alterations may affect disease progression and risk. This session highlighted several new areas of low-dose and low-dose-rate radiation studies and concluded with a review of data emerging from these studies. The future integration of modern molecular techniques with large-scale epidemiological studies of appropriate populations will likely be most illuminating in the pursuit of the true risks of low-dose, low dose-rate radiation exposures.

Signaling pathways in the bystander effect: Relevance of oxidative stress and very low fluences of particle radiations

Dr. Kathy Held (Massachusetts General Hospital in Boston) addressed radiation-induced bystander responses, i.e., the occurrence of biological changes in unirradiated cells in the proximity of, or sharing medium with, cells that have been traversed by ionizing radiation. An important characteristic of bystander effects is that the responses occur at low radiation doses (<0.05 Gy), increase rapidly with dose, and then reach a plateau, usually by 0.1 to 0.3 Gy. Dr. Held's group has shown that when normal human fibroblasts are irradiated with 0.1 to 0.5 Gy X rays or with 1 GeV/nucleon iron ions or protons, bystander cells exhibit increased DNA damage in the form of γ-H2AX foci or micronuclei and increased generation of reactive oxygen species (ROS). With all three radiation types, the bystander responses are decreased by addition of catalase, SOD or c-PTIO to the shared medium, indicating roles for hydrogen peroxide, superoxide and/or NO in the signaling. New data with 1 GeV/nucleon iron ions (LET of 151 keV/µm) show no significant increase in micronuclei in irradiated or bystander cells at fluences where 1% or less of the cells are traversed by a particle, but at fluences where 2% or more of the cells are traversed, there is a statistically significant ∼1.5-fold increase in micronuclei in both bystander and irradiated cells. The level of damage remains constant at that magnitude up to a fluence where ∼50% of the cells are traversed by an ion, above which the percentage of cells with micronuclei increases sharply with dose in irradiated cells but does not increase in bystander cells. Similarly shaped dose–response curves occur with 1 GeV protons (LET of 0.24 keV/µm), although the plateau in the responses for irradiated and bystander cells shifts to higher fluences. These data suggest that the increase in damage in the irradiated population at low doses is a result of bystander signaling rather than a direct effect of radiation on traversed cells. The bystander signaling was found to depend on cell type, radiation quality and end point.
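To make concrete the relationship between fluence and the fraction of cells traversed that underlies these observations, the following minimal sketch assumes Poisson-distributed particle traversals over a nominal projected nuclear area of 100 µm² (an illustrative value, not one reported in the talk); under that assumption the fraction of nuclei hit at least once is 1 − exp(−fluence × area).

```python
import math

def fraction_traversed(fluence_per_um2, nuclear_area_um2=100.0):
    """Fraction of cells whose nuclei are traversed by at least one particle,
    assuming Poisson-distributed hits over the projected nuclear area.
    The 100 um^2 nuclear area is an illustrative assumption."""
    mean_hits = fluence_per_um2 * nuclear_area_um2
    return 1.0 - math.exp(-mean_hits)

# Fluences chosen so that roughly 1%, 2% and 50% of nuclei are traversed,
# matching the thresholds discussed in the talk.
for fluence in (1e-4, 2e-4, 7e-3):
    frac = fraction_traversed(fluence)
    print(f"fluence {fluence:.1e} particles/um^2 -> {frac:.1%} of cells traversed")
```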

Examining the effects of low-dose ionizing radiation on the epigenome

Autumn Bernal (Dr. Randy Jirtle's laboratory at Duke University) presented the topic of epigenetic modification of DNA and its alteration by low-dose radiation exposures. The epigenome refers to the sum of heritable alterations in gene regulation that do not involve changes in the DNA base sequence and includes histone modifications, DNA methylation and changes in chromatin conformation. The majority of epigenetic studies have focused on DNA methylation and two major classes of methylated genes. The first is imprinted genes, in which one allele is methylated and effectively silenced during development. Loss of imprinting at specific loci can occur in response to environmental insults and can have serious consequences, including developmental or neurological disorders and carcinogenesis. The second class of commonly methylated genes is metastable epialleles. These genes are also regulated by epigenetic "marks" established early in development, but in this case the patterns of methylation are stochastic and can vary even among genetically identical individuals. These stochastic methylation patterns can alter an individual's risk of disease and are also subject to alteration in response to environmental exposures. Such findings have made the plasticity of the epigenome under environmental stress, including ionizing radiation, a topic of interest and concern. Ms. Bernal and the Jirtle laboratory have been investigating the effects of low-dose ionizing radiation using the Agouti viable yellow (Avy) mouse model. This mouse acts as a sensitive detector of environmentally induced epigenetic changes because a metastable epiallele produces a range of coat colors and degrees of obesity. Exposure to ionizing radiation in utero resulted in increased methylation of the Avy locus, with concomitant darkening of coat color and leaner mice. The increase in methylation was significant after doses as low as 1.2 cGy. Ongoing work includes definition of the mouse and human "imprintomes" using computational methods to predict all the genes in the genome potentially regulated by methylation. This information is expected to aid in the characterization of epigenetic changes induced by low-dose radiation on a global level, thus providing potential insight into the disease processes that may be affected and helping to connect epigenetic mechanisms with actual risk.

Radiation metabolomics

Dr. Albert J. Fornace, Jr. (Georgetown University) introduced metabolomics and discussed its early application to ionizing radiation studies. After the development of techniques for studying global genomics, transcriptomics and proteomics, techniques for studying small-molecule metabolites are now also being developed and refined. The identification of individual metabolites and the subsequent multivariate data analysis are key to the success of these techniques, and appropriate informatic methods for downstream data analysis are still evolving. Although the "metabolome" remains incompletely characterized, these small molecules can be quantified in diverse biofluids, including urine, sweat, saliva and blood serum. Characteristic alterations in metabolite profiles have been associated with various disease states, including cancer, and may provide sensitive biomarkers of early disease or recurrence. Alterations have also been documented in response to environmental exposures, dietary factors, cigarette smoke and, in recent work from Dr. Fornace and collaborators, ionizing radiation. One area of interest in this field has been the development of metabolomic markers for use in high-throughput biodosimetry. Dr. Fornace described radiation dose-dependent metabolomic changes occurring in various biofluids from multiple animal models, as well as in human patients undergoing total-body irradiation. Distinct metabolomic changes have been detected within hours of exposure to doses as low as 0.5 Gy, and some alterations persist for at least 30 days. Although studies in this field to date have focused on higher radiation doses, the approach has great potential for low-dose studies, and the high-throughput and non-invasive nature of sample collection may even make it feasible for epidemiological studies. Future efforts will involve the continued development of more refined multivariate analysis approaches and the integration of data from the genomic, transcriptomic, proteomic and metabolomic levels to provide a systems-level view of radiation responses after both high- and low-dose exposures.
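As a schematic illustration of the multivariate-analysis step described above, and not of the specific methods used by Dr. Fornace's group, the sketch below applies principal component analysis to a simulated metabolite-intensity matrix in which a few hypothetical metabolites shift after exposure; exposed and control samples then separate along the leading component.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated data: 20 control and 20 exposed samples, 50 metabolite features.
# The five "dose-responsive" metabolites and the size of their shift are
# purely illustrative assumptions.
controls = rng.normal(0.0, 1.0, size=(20, 50))
exposed = rng.normal(0.0, 1.0, size=(20, 50))
exposed[:, :5] += 2.0  # hypothetical radiation-responsive metabolites

X = np.vstack([controls, exposed])
labels = np.array([0] * 20 + [1] * 20)

# Project onto the first two principal components; the group shift dominates
# the leading component, separating exposed from control profiles.
scores = PCA(n_components=2).fit_transform(X)
print("mean PC1, controls:", round(float(scores[labels == 0, 0].mean()), 2))
print("mean PC1, exposed: ", round(float(scores[labels == 1, 0].mean()), 2))
```

In practice, metabolite identification, cross-validated classification and dose–response modeling would follow such a projection; the point here is only that dose-responsive signals tend to dominate the leading components of a metabolite matrix.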

Solid cancer after exposures with doses corresponding to the dose limits for radiation workers

Dr. Peter Jacob (Helmholtz Zentrum München, Institute of Radiation Protection) addressed solid cancer risks after occupational exposures to ionizing radiation, which mainly occur at low dose rates and may accumulate to effective doses of up to several hundred mGy (moderate doses). It is presently assumed that the cancer risk per unit dose after such exposures is smaller than that observed in the Life Span Study (LSS) of the atomic bomb survivors from Hiroshima and Nagasaki, who received acute, moderate- to high-dose exposures. To evaluate the evidence on cancer risks from low-dose-rate, moderate-dose exposures to ionizing radiation, a literature search for primary epidemiological studies of cancer incidence and mortality risks was performed. The analysis was restricted to studies reporting estimates of the excess relative risk (ERR) per unit dose for specific cancers. For each of these studies, the risk in the LSS was calculated for the same types of cancer with the same gender proportion and matched values for dose, mean attained age and mean age at exposure. Generally, the ERR per unit dose in the low-dose-rate, moderate-dose studies was larger than or similar to the corresponding estimate for the atomic bomb survivors. Overall, the ratio of the ERR per unit dose in the low-dose-rate, moderate-dose studies to the corresponding quantity in the LSS was 1.53 (95% CI: 1.00; 2.26). Dr. Jacob concluded that the analysis does not confirm that the cancer risk per unit dose for low-dose-rate, moderate-dose exposures is lower than that in the LSS, which challenges the cancer risk values currently assumed for occupational exposures. This applies in particular to the use of a dose and dose-rate effectiveness factor (DDREF) for low-dose-rate exposures: although such a factor may be observed in radiobiological experiments, its application in transferring risk values from the LSS to protracted occupational or medical exposures, in populations with a different lifestyle and genetic background than the atomic bomb survivors, is open to question.
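The pooling step behind the reported ratio can be illustrated with a minimal fixed-effect, inverse-variance sketch on the log scale; the study-level ratios and standard errors below are hypothetical placeholders, not the estimates used in Dr. Jacob's analysis, and the pooling method shown is one common choice rather than necessarily the one he applied.

```python
import math

# Hypothetical (ratio of study ERR per unit dose to the matched LSS ERR,
# standard error of the log ratio) pairs -- illustrative values only.
studies = [(1.2, 0.30), (2.0, 0.45), (0.9, 0.25), (1.8, 0.40), (1.4, 0.35)]

# Fixed-effect inverse-variance pooling on the log scale.
weights = [1.0 / se ** 2 for _, se in studies]
pooled_log = sum(w * math.log(r) for (r, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

ratio = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)
ci_high = math.exp(pooled_log + 1.96 * pooled_se)
print(f"pooled ratio {ratio:.2f} (95% CI: {ci_low:.2f}-{ci_high:.2f})")
```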

Concluding comments

Dr. Jerome Puskin (Radiation Protection Division, U.S. Environmental Protection Agency) presented the perspective of a regulatory agency on the speakers' research. He began by noting that the assessment of risks at the very low dose rates relevant to environmental exposures will probably always require an extrapolation from results of studies carried out at higher dose rates. He then underscored recent findings from epidemiological studies of cohorts receiving chronic radiation exposures, discussed by Dr. Jacob, which suggest, so far, that the risk per unit dose at low dose rates is comparable to what has been estimated for acute doses greater than 0.1 Gy. Dr. Puskin also acknowledged that results from radiation biology experiments indicate the existence of complex mechanisms operating at low doses that might significantly alter the response at very low dose rates, but he concluded that they provide no basis, at this point, for a regulatory agency to modify its assessment of risks from chronic, low-dose radiation.

SUPPLEMENTARY INFORMATION

Extended abstracts from the 2010 Conference on Radiation and Health are available at  http://dx.doi.org/10.1667/RR2435.1.S1 (10.1667_RR2435.1.S1.doc).

Acknowledgments

Organizing Committee:

Dan Stram, Co-Chair, University of Southern California

Alice Sigurdson, Co-Chair, National Cancer Institute

Sally Amundson, Columbia University

Parveen Bhatti, Fred Hutchinson Cancer Research Center

Alina Brenner, National Cancer Institute

Martin Colman, University of Texas Medical Branch-Galveston

Sarah Darby, Oxford University

Albert Hyacinth, Centers for Disease Control and Prevention

Peter Jacob, Helmholtz Zentrum München

Ruth Kleinerman, National Cancer Institute

Amy Kronenberg, Lawrence Berkeley National Laboratory

Mark Pearce, University of Newcastle

Jerome Puskin, Environmental Protection Agency

Winston Richards, Pennsylvania State-Harrisburg

Mary Schubauer-Berigan, National Institute for Occupational Safety and Health

Lydia Zablotska, University of California, San Francisco

John Zimbrick, Colorado State University

Sponsors:

U.S. Centers for Disease Control and Prevention-National Center for Environmental Health, National Institute for Occupational Safety and Health (Divisions of Compensation Analysis & Support and of Surveillance, Hazard Evaluations, and Field Studies)

U.S. Department of Energy, Office of Science, Biological and Environmental Research

U.S. Environmental Protection Agency, Office of Air and Radiation

Radiation Epidemiology Branch, Division of Cancer Epidemiology and Genetics, National Cancer Institute, NIH, DHHS

Radiation/Nuclear Countermeasure Program, National Institute of Allergy and Infectious Diseases, NIH, DHHS

Canadian Nuclear Safety Commission

"Radiation Exposures in Medicine: Biological and Public Health Significance," Radiation Research 175(1), 131-142, (1 January 2011). https://doi.org/10.1667/RR2435.1