Open Access
2 July 2024
Dosimetry: Was and Is an Absolute Requirement for Quality Radiation Research
Daniel Johnson, H. Harold Li, Bruce F. Kimler
Abstract

This review aims to trace the evolution of dosimetry, highlight its significance in the advancement of radiation research, and identify the current trends and methodologies in the field. Key historical milestones, starting with the first publications in the journal in 1954, will be synthesized before addressing contemporary practices in radiation medicine and radiobiological investigation. Finally, opportunities for future directions in dosimetry will be offered. The overarching goal is to emphasize the indispensability of accurate and reproducible dosimetry in enhancing the quality of radiation research and practical applications of ionizing radiation.

INTRODUCTION

Radiation dosimetry is an integral aspect of radiation research. It is the essential link between physical measurements and biological effects, underpinning the efficacy, safety, and development of radiation biology and radiation-based medical practices. We present a comprehensive exploration of dosimetry's vital role in radiation research, tracing its history, present application, and forward-looking prospects, emphasizing the indispensable nature of accurate dosimetry in ensuring the quality and safety of radiation research and practice.

As the quantitative measurement of the energy imparted by ionizing radiation to matter, dosimetry remains an absolute necessity in radiation research, ensuring the accuracy of dose delivery and allowing rigorous assessment of radiation's biological effects. It involves determining the absorbed dose, that is, the amount of radiation energy absorbed per unit mass of an object, and seeks to establish the correlation between the physical properties of radiation and its observed effects on biological systems (1). Clinical radiation medicine relies heavily on dosimetry to optimize radiation therapy treatments, protect patients and personnel from undue exposure, and advance scientific understanding in radiobiology, radiation chemistry, and radiation physics.

The importance of radiation dosimetry to clinical medical physics is multifaceted. Accurate dosimetry enables the translation of radiation measurements into meaningful dose quantities that can be compared against biological outcomes, such as cellular damage and repair, carcinogenesis, and other critical endpoints.

While the advantages gained through the implementation of diagnostic X ray and nuclear medicine imaging clearly outweigh the risks posed by the ionizing radiation employed, it is in the best interest of patients to minimize this risk by minimizing dose. To this end, it is only through the development of radiation detection tools and techniques that the radiation utilized can be both quantified and applied to predictive biological models.

In the realm of therapeutic applications, dosimetry is essential for the planning and delivery of radiation treatments. It ensures that the prescribed dose is delivered to the target volume with high precision, while minimizing the dose to surrounding healthy tissue and critical structures. This precision is achieved through the use of sophisticated dosimetry techniques and advanced computational algorithms that account for the complex interactions of radiation with matter.

Dosimetry plays a pivotal role in the development and implementation of radiation protection guidelines. Dosimetric data are used to design shielding, establish safety protocols, and monitor environmental and personnel exposure to comply with regulatory standards. Dosimetry is also instrumental in the quality assurance processes of radiotherapy equipment, verifying that machines operate within specified parameters.

Current dosimetry methods employed by medical physicists include but are not limited to ionization chambers, thermoluminescent dosimeters (TLDs), optically stimulated luminescence dosimeters (OSLDs), semiconductor detectors, and neutron detectors. Each of these methods offers different advantages in terms of sensitivity, energy dependence, and the range of doses they can accurately measure. Selection of a dosimetry system depends on the specific application, such as in vivo dosimetry, which provides real-time dose measurements during patient treatment, or passive dosimetry, used for monitoring occupational exposure over time.

Common clinical dosimeters are described and compared in Table 1.

TABLE 1

Common Clinical Dosimeters


Dosimetric measurements form the basis for radiation dose-response models, which predict the risk associated with various levels of exposure. This modeling is crucial for setting safety standards and determining the probability of radiation-induced effects in both diagnostic and therapeutic contexts.

The significance of dosimetry also extends to research and development, guiding the creation of new radiation-based medical technologies and therapies. It provides the empirical data needed to validate theoretical models of radiation interaction with biological tissues and to refine computational dosimetry models, such as Monte Carlo simulations.

This literature review aims to encapsulate the evolution of dosimetry as a discipline, highlight its significance in the advancement of radiation research, and identify the current trends and methodologies in the field. The review will synthesize key historical milestones, analyze contemporary practices, and project future directions in dosimetry. The overarching goal is to underline the indispensability of dosimetry in enhancing the quality of radiation research and its applications.

Dosimetry in the Early Years of Radiation Research

Starting with the first paper (2) in the first issue of the new journal Radiation Research in 1954, an appreciation of the need for dosimetry of ionizing radiations was firmly established in this developing field. Of the additional 24 papers in the first three issues, eight dealt with the measurement of radiations from a physical standpoint (e.g., neutrons vs. photons) or involved aspects of radiation chemistry (e.g., yields of free radicals in water) that were integral to the chemical dosimetry methods available at the time.

In his “Introductory Remarks on the Dosimetry of Ionizing Radiations”, based on a talk in a symposium on Physical Measurements in Radiobiology at the 1953 meeting of the Radiation Research Society, Ugo Fano presented the then-current state of the art regarding the measurement of the physical effects of exposure to radiation. He would consider the “job as done” if two goals could be achieved.

The first requirement for acceptable dosimetry was a statement that “This material has received from ionizing radiation a dose of x ergs (unit for energy)/gram at the point of interest”. This reflected the concept, developed over the preceding years, that a measurement simply of the number of ionizations in a volume would not suffice. That is, there was a need to move beyond the roentgen (R), which had been used since 1928 as a unit of exposure and was defined as the amount of radiation that would produce one electrostatic unit of charge in 0.001293 g of air (at 0°C and one atmosphere of pressure). To accommodate radiation absorption in a realistic biological material rather than air, water was a convenient standard, and defining a dose as the amount of energy that would be deposited in a set volume of water by one roentgen of radiation was likewise convenient. Given water's unit density, this allowed ready expression of energy absorption in units of erg/g, with typical values for the radiations being studied in the neighborhood of 80–90 erg/g. To make matters even more convenient, the unit of the rad (radiation absorbed dose) was recommended by the ICRU in 1953, defined as 100 erg of energy absorbed per gram. This system used cgs units [cm (centimeter), g (gram), s (second)]; with the movement to the mks system [lengths in meters, mass in kilograms, time in seconds], a unit equivalent to one joule per kilogram of tissue was adopted in 1975 as the SI unit for absorbed dose, enabling a universal standard that could be applied in both diagnostic imaging and radiation therapy. In recognition of Hal Gray's seminal work in the area of dosimetry, the new unit was named the gray (Gy). Since 1 Gy = 100 rad, conversion of units (and prior results) was an easy matter.
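These unit relationships reduce to simple arithmetic. A minimal sketch (the function names are illustrative):

```python
def erg_per_gram_to_rad(erg_per_g: float) -> float:
    """1 rad is defined as 100 erg of absorbed energy per gram."""
    return erg_per_g / 100.0

def rad_to_gray(rad: float) -> float:
    """1 Gy = 1 J/kg = 1e7 erg / 1e3 g = 1e4 erg/g = 100 rad."""
    return rad / 100.0

# A typical exposure from the era depositing ~88 erg/g in water:
dose_rad = erg_per_gram_to_rad(88.0)  # 0.88 rad
dose_gy = rad_to_gray(dose_rad)       # 0.0088 Gy
```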

In 1953, a measurement of energy deposited in a mass could be derived from measurements of ionizations in air regardless of the composition of that mass, as long as the mass was homogeneous at the level of ionizations and X rays were the radiation source. For X rays typically in the range of a few hundred keV, two measurement approaches were available, both depending on the assessment of ionizations produced in a cavity. In the third paper of the first issue, Leonidas Marinelli provided the background rationale, as well as the conditions for use, of open-air ionization chambers and the thimble air-ionization chamber (3). The former was most reliable for X-ray energies below 200 keV (commonly in use at the time), while the latter was preferred at higher energies. Marinelli gave not only detailed instructions for the optimum geometry of source, absorber, and dosimeter that should be employed by the radiobiologist, but also the accommodations that could be made to conform to non-optimum experimental requirements. Although the dose deposited by exposure to 1 R would vary slightly depending on the incident photon energy, the design of an open-air chamber or the composition of the thimble wall, and the density of the biological mass under consideration, the factors describing these fluctuations were well known both theoretically and from empirical observation. By paying attention to the numerous requirements and cautions about the performance of the measurements, a radiobiologist could be reasonably confident of obtaining an accurate and precise value for energy absorption in erg/g.

The situation was not so clear cut if radiations other than X rays were employed. Especially in the case of neutrons, which were a hot topic of investigation in the 1950s, Fano added a second requirement: “and this energy has been dissipated along the tracks of charged particles with a linear energy transfer (LET) of y Mev/(gm/cm2)”. That is, what mattered was not only the amount of energy absorbed but also where it was deposited at the microscopic level. This concept was taken up by Burton Moyer in the second paper of the issue, where he discussed “Neutron physics of concern to the biologist” (4). In contrast to the relatively simple considerations for absorption of energy from monochromatic X rays in a homogeneous medium, neutrons were far more complex in their energy deposition. Correspondingly, many more factors needed to be included in the increasingly complex set of measurements, with a broader range of instruments and techniques, that were required to provide reliable determinations of LET. Unfortunately, at the time, the required measurements of neutron flux and spectrum were characterized as “physics research projects rather than unambiguous observations to be made with conventional equipment”.

Given the above state of the art for determination of LET, it is not surprising that Fano commented regarding his two requirements that “This goal of dosimetry has hardly ever been attained yet”. But he followed this pessimistic assessment with “In the majority of radiobiological research problems one probably need not even try to approach this goal.” This was probably a valid conclusion, given that most radiobiological researchers were employing X rays, for which dosimetry could be adequately accomplished with standardized and readily available ionization chamber instruments. And where neutrons were being studied, it was in major physics research centers, where physicists with specific expertise in neutron physics and dosimetry were likely to be available. Nonetheless, Fano did seek to educate the radiobiologist as to how much dosimetric accuracy would be sufficient, as well as the experimental situations in which it would be reasonable not to expend greater effort on more precise dosimetry. But even given this latitude, it was accepted that reliable, reproducible, and adequately accurate dosimetry was a foundational requirement for high-quality radiobiological research.

HISTORICAL PERSPECTIVE OF DOSIMETRY

In the sphere of radiation research, dosimetry stands as a foundational pillar, ensuring the efficacy and safety of therapeutic and diagnostic applications. It is an area that has evolved significantly since the 1950s, adapting to technological advancements and the ever-increasing complexity of radiation-based treatments. This expansion necessitates a deeper exploration of the historical perspective, key figures, and pivotal moments that have shaped dosimetry into an indispensable tool in radiation research.

Early Methods of Radiation Measurement

The history of dosimetry traces back to the late 19th century, following the discovery of X rays by Wilhelm Conrad Röntgen. Initially, dosimetry was rudimentary, relying on simple devices such as electroscope-based ionization chambers, which provided a basic measure of exposure through the ionization of air (5). As ionizing radiation began to be employed in medical treatments, the need for more sophisticated measurement techniques became clear.

Advent of Dosimetry After the Discovery of X Rays

The medical application of X rays spurred the development of dosimetric methods. Skin erythema was an early biological indicator of radiation exposure; however, it was a crude and unreliable measure, prompting the use of dosimeters that could provide quantitative measurements. The adoption of film badges in the 1920s allowed more accurate monitoring of occupational exposure over time, although these too had limitations in terms of dose range and energy dependence (1).

Impact of World Wars on Dosimetry Advancements

The World Wars catalyzed rapid advancements in dosimetry, driven by the increased use of radiation in medicine and the need for protective measures against the effects of nuclear weaponry. Research during this period led to the development of more advanced dosimeters capable of measuring high radiation doses and various types of radiation, including neutrons, which were a focus due to their production in nuclear fission (6).

The Roentgen Era and Standardization Efforts

The Roentgen era was marked by the adoption of the roentgen unit as the standard measure of X-ray and gamma-ray exposure. This unit was insufficient for expressing the energy deposited in biological tissue, leading to the development of new units such as the rad and subsequently the gray, which took into account the energy absorbed per unit mass, providing a more direct correlation with biological effects (7).

Contributions of Key Figures in Dosimetry

The contributions of Marinelli and Fano to the field of dosimetry have been pivotal, establishing foundational principles and methods that underpin modern radiation measurement and safety protocols. Their work in the mid-20th century provided essential guidance in understanding the complex interaction of radiation with matter and set standards that are still referenced today.

Marinelli's seminal work on cavity chamber theory offered a new perspective on dose measurement, which became crucial for the calibration of radiation therapy equipment. His insights into the energy-dependent responses of dosimeters laid the groundwork for more accurate and reliable dose measurements. The cavity theory helps account for the variations in energy deposition in different materials, guiding the dosimetry used in both diagnostic radiology and therapeutic radiology (8).

Ugo Fano's contributions were equally significant, particularly in his development of the concept of LET, which describes the energy released by ionizing radiation as it travels through matter. Fano's work on radiation quality, quantifying the ionization density along the tracks of charged particles, was crucial for understanding the biological effects of radiation, allowing for a better assessment of radiation risk and the effectiveness of radioprotective measures (9).

Their collective efforts have not only influenced the practice of dosimetry in clinical settings but also propelled research in radiation protection and biophysical modeling. Fano's involvement in setting up dosimetric protocols ensured that dosimetry could meet the scientific standards required for reproducibility and accuracy, which are essential for translating laboratory research into clinical practice (10).

Methodologies that grew out of their work, including the calculation of dose distributions and dosimetry parameters using Monte Carlo simulations, remain relevant today. These simulations allow for the precise prediction of dose distributions around various types of brachytherapy sources, critical for the tailored treatment plans of modern radiotherapy (11).

Together, Marinelli and Fano's work laid the groundwork for critical dosimetric parameters such as radial dose functions and anisotropy functions, which are integral to brachytherapy treatment planning today. The detailed dose-rate distributions characterized around high dose-rate sources provide the accuracy required in treatments that target very specific regions, minimizing damage to surrounding healthy tissue (12).

Their legacy in the field of dosimetry continues to influence contemporary practices, ensuring that dosimetry remains a precise and accurate science. The development of dosimetric principles by Marinelli and Fano has been crucial not only in enhancing the safety and efficacy of radiation therapies but also in fostering further innovation in the field. Their work exemplifies the rigorous scientific inquiry and application that is the hallmark of high-quality radiation research.

Technological Evolution and Computational Dosimetry

The 1950s were characterized by significant advancements in dosimetry technology. Innovations included the development of portable and more sensitive ionization chambers, improvements in photographic dosimetry, and the introduction of TLDs, which expanded the capability to measure and record dose information over time (13).

As computer technology advanced, so did computational dosimetry, employing complex algorithms to simulate radiation transport and interaction in the human body, further refining dose calculation and planning in radiation therapy (14). The adoption of Monte Carlo simulations represented a significant leap forward in dosimetry, allowing for highly accurate three-dimensional dose distributions that could be tailored to the patient's anatomy (15).

Contemporary Dosimetry and Personalized Medicine

Today, dosimetry is integral to the practice of personalized medicine in radiation therapy, where dose distribution can be precisely modeled and delivered, maximizing the therapeutic ratio. Innovations such as real-time in vivo dosimetry and adaptive radiation therapy rely heavily on accurate dosimetric data to adjust treatment plans dynamically based on the actual dose delivered during each therapy session (16).

EVOLUTION OF ABSOLUTE DOSIMETRY: TECHNOLOGICAL ADVANCEMENTS

The evolution of dosimetry from the 1950s to 2024 reflects the profound impact of technological advancements on the field of radiation therapy. At the heart of this evolution lies the concept of absolute dosimetry, the precise quantification of the dose delivered by radiation sources, which is indispensable for ensuring accurate and safe delivery of radiation therapy in medical settings. Absolute dosimetry has traditionally relied on three measurement techniques: calorimetry, Fricke dosimetry, and ionization chamber measurement.

Ion Chamber: The Pinnacle of Absolute Dosimetry

The ionization chamber, a cornerstone in the arsenal of dosimetric tools, operates on the principle of charge collection via radiation induced gas ionization. It facilitates the direct determination of the absolute dose by collecting charge carriers produced within its sensitive volume—thus, serving as a bridge between the radiation field and the quantifiable dose metrics.

Cavity Theory

Cavity theory, fundamental to ionization chamber dosimetry, has evolved significantly from its inception by W.H. Bragg and L.H. Gray, known as the Bragg-Gray cavity theory, to more contemporary adaptations such as the Spencer-Attix cavity theory. The Bragg-Gray theory posits that a small gas-filled cavity within an irradiated medium can be used to infer the absorbed dose from the charge produced by secondary charged particles (electrons in the case of photon beams; protons, α particles, and recoil nuclei for neutrons) crossing the cavity without disturbing the charged-particle fluence (17). This method relies on the cavity being sufficiently small relative to the range of the charged particles to avoid perturbing the medium's electron field.

Further refining this approach, Spencer and Attix introduced modifications that account for scenarios where the cavity size is comparable to or larger than the range of the secondary particles, addressing the changes in energy fluence over the cavity volume. Their theory adjusts for the differences in the charge particle spectrum across the cavity and emphasizes the stopping-power ratio, which is crucial for calculating the dose delivered to the gas relative to the surrounding medium (18). These modifications are pivotal in scenarios where charged particle equilibrium does not apply, such as near interfaces between different media or at the edge of a beam.
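In code, the Bragg-Gray relation reduces to two multiplications: the charge per unit mass of cavity air times the mean energy expended per unit charge gives dose to air, and the stopping-power ratio scales that to dose to the medium. A minimal sketch, assuming the standard W/e for dry air (≈33.97 J/C) and an illustrative water-to-air stopping-power ratio; the function names and numbers are for illustration only:

```python
W_OVER_E_AIR = 33.97  # J/C, mean energy expended in air per unit charge collected

def bragg_gray_dose(charge_c: float, air_mass_kg: float,
                    stopping_power_ratio: float) -> float:
    """Dose to the medium (Gy) via the Bragg-Gray relation:
    D_med = (Q / m_air) * (W/e) * (S/rho)_med,air
    """
    dose_air = (charge_c / air_mass_kg) * W_OVER_E_AIR  # dose to cavity air, Gy
    return dose_air * stopping_power_ratio

# A 0.6 cm^3 chamber holds roughly 7.2e-7 kg of air at standard conditions;
# 20 nC collected, with an assumed water/air stopping-power ratio of 1.133:
d = bragg_gray_dose(20e-9, 7.2e-7, 1.133)  # ~1.07 Gy
```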

Monte Carlo simulations have played a crucial role in evaluating and confirming the stopping-power ratios, a fundamental component in ion chamber dosimetry (19, 20). These simulations have shown a high degree of agreement (to about 0.1%) in stopping-power ratios, provided the same data sets are used, underscoring the precision required in dosimetric calculations. Despite this precision, historical discrepancies due to varied electron stopping-power data sets caused significant confusion, a problem now largely resolved by the adoption of standardized data from ICRU Report 37, based on Berger and Seltzer's work at NIST (21).

However, practical challenges remain in the field due to the non-ideal nature of real ion chambers. These chambers often require complex calibration protocols to accurately measure dose. The AAPM Task Groups 21 and 51 have addressed these challenges through standardized calibration protocols that consider additional factors such as beam quality, temperature and pressure corrections, and dose gradient effects. For cylindrical ion chambers, in-phantom calibration factors are applicable with the central axis of the chamber at the measurement point. For plane-parallel chambers, the measurement point is considered to be the inside face of the front window, ensuring accuracy across varying experimental setups and conditions (22, 23).
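Among the correction factors these protocols specify, the temperature-pressure correction for a vented chamber is the simplest to illustrate: it rescales the reading to the reference air density. A minimal sketch using the TG-51 reference conditions (22°C, 101.33 kPa); the function names are illustrative:

```python
def p_tp(temp_c: float, pressure_kpa: float) -> float:
    """Temperature-pressure correction for a vented ion chamber
    (TG-51 convention, reference conditions 22 C and 101.33 kPa)."""
    return ((273.2 + temp_c) / (273.2 + 22.0)) * (101.33 / pressure_kpa)

def corrected_reading(raw_reading: float, temp_c: float,
                      pressure_kpa: float) -> float:
    """Scale a raw electrometer reading to reference air density."""
    return raw_reading * p_tp(temp_c, pressure_kpa)

# At reference conditions the correction is unity; warmer or lower-pressure
# air is less dense, so the reading is corrected upward.
k = p_tp(25.0, 98.0)
```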

Calibration Standardization

In the early days of radiation research, calibration protocols for radiation-producing devices were non-existent. It was not until the SCRAD protocol (1971) and ICRU Report 21 (1972) that protocols for photon and electron beams were established; these were later refined by the AAPM Task Group 21 in 1983 (TG-21) and simplified by TG-51 in 1999. These protocols, with international counterparts such as the IAEA TRS 398, set the stage for a unified approach to dosimetry across borders (24).

Lower Energy X-Ray Systems

The use of keV delivery systems in radiation biology and preclinical radiobiology necessitates an intricate understanding of X-ray irradiators and the dosimetric protocols that govern their operation. X-ray irradiators, typically operating between 40–350 kVp, are fundamental for animal irradiation studies due to their ability to achieve deep tissue penetration and maintain dose uniformity across small subjects like nude mice (25). The complexity of these systems is highlighted by their reliance on multiple filters, such as aluminum and copper, and variable source-to-surface distances that can significantly affect beam quality and dosimetric accuracy.

The dosimetric evaluation of these systems is guided by the American Association of Physicists in Medicine (AAPM) protocol TG-61, which provides a framework for the calibration and measurement of keV X ray systems (25). This protocol involves the use of an ionization chamber to measure air kerma, Kair, which is then converted to dose to water using correction factors to account for the energy and quality of the X ray beam. The half-value layer (HVL), measured with high-purity copper attenuators, serves as a proxy for assessing the beam quality, determining air-kerma calibration factors, and adjusting mass energy absorption coefficients for dose conversion.
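The in-air conversion described above has a multiplicative form: the corrected chamber reading times the air-kerma calibration factor, backscatter factor, stem correction, and water-to-air mass energy-absorption coefficient ratio. A minimal sketch with placeholder factor values; in practice each factor is read from the TG-61 tables for the measured HVL and field size:

```python
def tg61_dose_to_water(reading_c: float, n_k: float, backscatter: float,
                       p_stem: float, mu_en_ratio: float) -> float:
    """In-air method: dose to water at the phantom surface (Gy).
    reading_c   : temperature/pressure-corrected chamber reading (C)
    n_k         : air-kerma calibration factor for the beam's HVL (Gy/C)
    backscatter : backscatter factor B_w for the field size and HVL
    p_stem      : stem correction (often close to 1.0)
    mu_en_ratio : water-to-air mass energy-absorption coefficient ratio
    """
    return reading_c * n_k * backscatter * p_stem * mu_en_ratio

# Illustrative numbers only -- not tabulated protocol values:
d_w = tg61_dose_to_water(12e-9, 4.5e7, 1.25, 1.0, 1.05)
```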

The application of TG-61, primarily designed for human radiotherapy settings, presents several challenges when adapted to preclinical irradiation scenarios. The protocol's calibration methods, either in-air or in-phantom, are skewed towards conditions more typical of human treatment, such as narrowly collimated beams and predefined field sizes. In contrast, preclinical irradiation often involves larger, uncollimated fields to accommodate multiple animals, leading to significant discrepancies in dose distribution and increased field inhomogeneity (26). This results in animals at the periphery of the field receiving doses up to 20% lower than those at the center, complicating the interpretation of biological effects and the reproducibility of experiments.

The standard calibration conditions stipulated by TG-61 do not account for the backscatter factor's sharp increase with field size or the scatter photon and electron contamination from the X-ray cabinet, which becomes more pronounced with larger fields. These factors contribute to a systematic underestimation of the dose delivered to small animals, as evidenced by studies comparing TG-61 derived calibrations with MOSFET-based dosimetry, which have shown underestimations in the range of 3–7% (27).

Given these substantial gaps between TG-61 guidelines and the actual conditions in preclinical irradiation experiments, there is a need for updated protocols and additional in vivo dosimetry aligned more closely with the specific requirements of small-animal studies. These adaptations could include the development of correction coefficients for wide-field irradiation and adjustments to calibration practices that reflect the non-ideal, scattered environments typical of preclinical labs. Such enhancements would significantly improve the accuracy of dose measurements and help ensure that experimental outcomes are reliable and reproducible across different research settings.

The Quintessential Ion Chamber

The categorization of ion chambers into three distinct types—Farmer, micro, and parallel plate—addresses the varied demands of clinical and research dosimetry.

Farmer Chamber

The Farmer chamber, pioneered by Aird and Farmer in 1972 (28), is a thimble chamber revered for its robust construction and versatility across different radiation qualities. Its design, optimized for the measurement of high-energy photon and electron beams, has become a staple in radiation therapy clinics worldwide. The Farmer chamber's utility lies in its geometric design and material composition, which ensure energy independence and a nearly water-equivalent response—qualities essential for accurate dosimetry.

Micro Chamber

Micro chambers epitomize the finesse required in dosimetry for small radiation fields. They are particularly valuable in the realms of stereotactic radiosurgery and brachytherapy, where their small sensitive volume allows for high-resolution measurements in regions with steep dose gradients. The ability of micro chambers to resolve intricate dose distributions ensures the preservation of tissue-sparing techniques while optimizing the therapeutic dose (29).

Parallel Plate Chamber

Preferred for their enhanced spatial resolution, especially below 10 MeV, parallel plate chambers offer an unparalleled advantage in measuring dose distribution for electron beams. Their narrow plate separation ensures a minimal variation in beam intensity across the sensitive volume, translating to precise dosimetry in clinical electron beams (30).

Spherical Chambers

In the domain of radiation dosimetry, spherical ionization chambers have established themselves as the paragon for primary standard dosimetry, particularly for gamma-ray beams from isotopes like Cobalt-60 (60Co) and Cesium-137 (137Cs) often used in industrial irradiator systems. Their robust design principles and meticulously calibrated volumes allow for an exceptionally uniform response to ionizing radiation, a vital attribute when establishing a primary dosimetry standard. Spherical chambers are meticulously crafted to uphold the integrity of air kerma measurements. Their geometric symmetry ensures a uniform collection of ionization charge, free from the angular dependency that might afflict other chamber designs, thus offering a homogeneous response irrespective of the direction of incident radiation. Consequently, these chambers have become the instruments of choice for creating a dosimetry baseline in clinical settings, against which other measurement tools can be calibrated and validated.

The significant variability in chamber volumes, ranging from a few cubic centimeters to several thousand, caters to a diverse spectrum of measurement contexts, from precise benchtop calibrations to large-scale environmental dosimetry. Notably, chambers with larger volumes, such as those reaching 15,700 cm3, offer a greater mass of air for ionization, thus providing enhanced sensitivity and lower noise levels in measurement, critical for establishing a primary standard in low-level radiation fields (31). The design of these chambers often incorporates features to counter environmental factors, such as temperature, pressure, and humidity, which might otherwise influence the accuracy of the dose measurement, thus ensuring reliability in a variety of operational conditions (32).

The standard set by spherical ionization chambers is underpinned by rigorous metrological practices. These chambers are calibrated against known physical quantities, with their measurements traceable to standards maintained by organizations like the National Institute of Standards and Technology (NIST) in the United States. The confidence in dose delivery and safety protocols that these chambers provide cannot be overstated, as they ensure that patients receive treatment doses that are both precise and safe, with minimal uncertainty.

Fricke Dosimetry

The use of ferrous sulfate solutions to measure radiation doses has been integral to radiation therapy and safety. Introduced by Hugo Fricke and S. Morse in 1927, this dosimetric technique, based on the oxidation of ferrous ions (Fe2+) to ferric ions (Fe3+) upon exposure to ionizing radiation, offers a quantifiable approach to assessing absorbed radiation doses and has evolved significantly since (33). Traditionally relevant in the administration of high dose rate brachytherapy with 192Ir sources, Fricke dosimetry has facilitated advancements in cancer treatment modalities.

The method has undergone numerous refinements, incorporating advancements such as gel dosimeters in the 1980s to measure spatial radiation dose distributions accurately. Despite its benefits, challenges such as ion diffusion within gel dosimeters have prompted further innovations. Studies, such as those by P. Rosado et al., demonstrate Fricke dosimetry's accuracy in determining the absorbed dose to water for medium-energy X-ray beams, reinforcing its utility across various radiation types (34). These advancements underscore the method's adaptability and its critical role in the ongoing enhancement of radiation research techniques.
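The readout of a Fricke dosimeter follows from the Beer-Lambert law plus the radiation chemical yield: the absorbance change gives the Fe3+ concentration, and dividing by density and G(Fe3+) converts that to energy per unit mass. A minimal sketch, assuming commonly cited nominal values (ε(Fe3+, 304 nm) ≈ 2187 L mol⁻¹ cm⁻¹, solution density ≈ 1.024 kg/L, G(Fe3+) ≈ 1.61e-6 mol/J for ⁶⁰Co); these defaults are illustrative, not protocol values:

```python
def fricke_dose(delta_a: float, epsilon: float = 2187.0, path_cm: float = 1.0,
                density_kg_per_l: float = 1.024,
                g_value: float = 1.61e-6) -> float:
    """Absorbed dose (Gy) from the radiation-induced absorbance change:
    D = delta_A / (epsilon * l * rho * G(Fe3+))
    Beer-Lambert converts delta_A to the Fe3+ concentration (mol/L);
    dividing by rho * G yields J/kg, i.e. Gy."""
    conc = delta_a / (epsilon * path_cm)        # mol/L of Fe3+ produced
    return conc / (density_kg_per_l * g_value)  # Gy

d = fricke_dose(0.072)  # roughly 20 Gy with these nominal values
```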

Calorimetry

Calorimetry, a pivotal method in dosimetry, offers precise measurement of absorbed dose by relying on the thermometric detection of heat produced in a medium by ionizing radiation. The fundamental principle underpinning calorimetry is the direct relationship between the heat produced and the energy absorbed from radiation, making it indispensable for establishing absorbed-dose standards. The evolution and refinement of calorimetric techniques have significantly contributed to advancements in radiation dosimetry, particularly in the calibration of dosimeters and the validation of dose delivery in radiotherapy. Although calorimeters are valuable as first-principles dosimeters and dispense with the cables and electrometer of an ionization-chamber setup, they are cumbersome by comparison, and this additional complexity has prevented them from becoming a mainstay within medical environments.
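The direct relationship between heat and absorbed energy can be written as D = c·ΔT / (1 − h), where c is the medium's specific heat capacity and h is the heat defect, the fraction of absorbed energy that does not appear as heat. A minimal sketch with illustrative numbers:

```python
# Minimal sketch of the calorimetric dose relation D = c * dT / (1 - h).
# The temperature rise below is illustrative, not a measured value.

C_WATER = 4184.0  # J kg^-1 K^-1, specific heat capacity of water (approximate)

def dose_from_temperature_rise(delta_t_kelvin, heat_defect=0.0):
    """D = c * dT / (1 - h); h is the fraction of energy not appearing as heat."""
    return C_WATER * delta_t_kelvin / (1.0 - heat_defect)

# A ~2 Gy delivery warms water by only ~0.5 mK, which is why
# sub-millikelvin thermometry is required:
dose = dose_from_temperature_rise(4.78e-4)
```

The tiny temperature signal per gray is the root of the practical cumbersomeness noted above: the measurement demands exquisite thermal isolation and stability.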

Water Calorimetry

Water calorimetry serves as the primary standard for absorbed dose measurements, capitalizing on water's equivalence to human tissue in terms of radiation interaction properties. The National Institute of Standards and Technology employs a sealed water calorimeter, which uses a sealed glass vessel encompassed by water within a phantom. This design minimizes impurities and gas exchange, ensuring precise measurements (35). The sealed water calorimeter's advantage lies in its ability to provide a highly controlled environment, facilitating accurate determination of absorbed doses by measuring temperature changes within a sealed water volume, thus playing a crucial role in the calibration of ionization chambers under Task Group 51 (TG-51) protocols (24).

Graphite Calorimetry

Graphite calorimeters measure the temperature change of a graphite core, translating these measurements into absorbed dose to water through analytical conversions. This technique, tracing back to foundational work by Domen and Lamperti (36), allows for the calibration of ionization chambers by measuring absorbed doses in graphite and then converting these measurements to equivalent doses in water. The historical significance and continued use of graphite calorimeters in primary standard laboratories underscore their reliability and precision in absorbed dose measurements, despite the transition towards more direct water-based methods in some contexts.

Innovations and Developments

Advancements in calorimetry aim to address challenges in measuring doses under non-reference conditions, such as in intensity-modulated radiation therapy (IMRT) treatments. Innovations include the development of calorimeters capable of direct absorbed dose to water determination in small and complex radiation fields, thereby minimizing detector perturbations and thermal effects (37). Peer collaborations, such as the commissioning of a sealed water calorimeter between NRC and METAS, highlight the international efforts to enhance dosimetric accuracy using 60Co γ rays (38).

The adaptation of calorimetry to contemporary clinical requirements is evident in the design of a prototype low-cost secondary standard calorimeter by Bass et al. (39), aimed at reference dosimetry with ultra-high pulse dose rates. This innovation represents a shift towards developing accessible, clinically relevant calorimetric solutions applicable in FLASH radiotherapy.

Calorimetry's evolution from graphite to water-based systems, coupled with ongoing innovations, underscores its fundamental role in radiation dosimetry. By providing a direct measurement of absorbed dose, calorimetry enhances the accuracy of dosimetric standards and plays a crucial role in the calibration of dosimeters.

MODERN RELATIVE DOSIMETRY: TECHNIQUES AND CLINICAL SYSTEMS

Relative dosimetry plays a pivotal role in the field of radiation therapy. This discipline involves the comparison of doses measured in different regions of a radiation field to a reference dose, facilitating the precise calibration of dose distributions essential for effective cancer treatment. The significance of relative dosimetry extends beyond clinical applications into the domains of radiation biology and scientific research, where understanding the biological effects of radiation necessitates exact dosimetry.

In radiation biology, the effects of ionizing radiation on biological systems are complex and vary significantly with dose and dose rate. Precise dosimetry is crucial for correlating specific doses with biological outcomes, such as DNA damage, cellular repair mechanisms, and the stochastic effects that may lead to cancer or genetic mutations (40, 41). The integrity of scientific research in this field hinges on the reliability of dosimetric data to ensure that the conclusions drawn from experimental studies are accurate and reproducible.

The advancement of radiation therapy techniques, including brachytherapy, stereotactic radiosurgery (SRS), and IMRT, necessitates sophisticated dosimetric methods to achieve conformal dose distributions with high precision. Technologies such as MOSFETs (42), TLDs, and radiochromic films have been instrumental in advancing relative dosimetry for both clinical and research applications, enabling the accurate measurement of dose distributions even in complex geometries and small fields (43, 44).

This overview underscores the importance of relative dosimetry in the realms of radiation therapy, biology, and scientific research. The accurate determination of dose distributions not only ensures the efficacy and safety of radiation therapy treatments but also underpins our understanding of the fundamental interactions between ionizing radiation and biological tissues.

Diode Dosimeters

The evolution of diode dosimeters in radiation therapy traces its roots back to 1963, when Jones first proposed the use of single diodes for photon-beam dosimetry, signifying a pivotal shift in the landscape of dosimetric practices (45). Subsequent years witnessed a burgeoning interest in their application, notably by Grusell and Rikner in the 1980s, who extensively explored the advantages and limitations of diode dosimeters, paving the way for modern dosimetry. Semiconductor diodes, due to their small size and high sensitivity, offer precise dose measurements, making them invaluable in the field of radiation therapy (46, 47).

Diode dosimeters have found widespread application in photon- and electron-beam dosimetry, offering advantages over traditional ionization chambers by eliminating the need for depth-dependent corrections and facilitating in vivo dosimetry (48, 49). Their high spatial resolution and minimal volume-averaging effects make them invaluable for the precise measurement of depth-dose and lateral-profile distributions, particularly in electron beams and in complex radiotherapy treatments (48, 50). Their utilization in in vivo dosimetry, particularly for treatments involving the skin in electron and photon beams, underscores their critical role in ensuring accurate dose delivery to patients (49, 51).

Diode dosimeters offer several advantages, including high sensitivity, minimal perturbation of the radiation field, and the ability to measure small field sizes accurately. Their use is not without limitations, however: their response is affected by temperature, accumulated radiation damage, beam quality and energy, and angle of incidence, necessitating regular calibration and the application of correction factors to maintain accuracy (46, 52–54). The development of diode arrays for high-resolution dosimetry in complex radiotherapy treatments, such as IMRT and volumetric modulated arc therapy (VMAT), represents a significant advancement in the field, providing detailed dosimetric information that enhances treatment planning and verification (55).
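In practice, these correction factors enter multiplicatively: dose = raw reading × calibration coefficient × Πᵢ kᵢ. The sketch below shows the mechanics only; the factor names and all numeric values are invented placeholders, not values from any published protocol.

```python
# Hedged sketch: applying a calibration coefficient and multiplicative
# correction factors to a raw diode reading, D = M * N_cal * product(k_i).
# All names and numbers here are hypothetical placeholders.

def corrected_dose(raw_reading, n_cal, corrections):
    """Dose = raw reading x calibration coefficient x product of corrections."""
    dose = raw_reading * n_cal
    for factor in corrections.values():
        dose *= factor
    return dose

dose = corrected_dose(
    raw_reading=152.3,     # detector signal, arbitrary units (assumed)
    n_cal=0.01314,         # Gy per unit reading, from cross-calibration (assumed)
    corrections={"temperature": 1.002, "angle": 0.998, "field_size": 1.005},
)
```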

Advancements in semiconductor technology have led to the development of novel diode dosimeters with improved radiation hardness and reduced temperature sensitivity (56). Innovations such as the angular-independent silicon detector for dosimetry in external beam radiotherapy represent significant milestones in the quest for more reliable and accurate dosimetry solutions (57).

The future of diode dosimetry lies in the exploration of new semiconductor materials and the integration of micro-fabrication technologies to enhance dosimeter performance and application. Semiconductor diode dosimeters have played a pivotal role in the evolution of radiation dosimetry, offering high precision and reliability essential for modern radiation therapy techniques. Their ongoing development and refinement continue to address existing challenges, promising further enhancements in dosimetric accuracy.

MOSFET

The significance of metal-oxide-semiconductor field-effect transistor (MOSFET) dosimetry in radiation therapy and brachytherapy underscores a pivotal advancement in the precise delivery and monitoring of therapeutic radiation doses. Initiated in 1970, the proposition of MOSFETs for dosimetry marked the beginning of a transformative journey, culminating in their widespread application across various modalities (58–60).

These devices have been effectively utilized in brachytherapy, exhibiting remarkable real-time quality assurance capabilities, as illustrated by Carrara et al. who highlighted their pivotal role in enhancing the safety and efficacy of brachytherapy procedures (42).

The advent of novel detectors, as discussed by Rosenfeld, has further revolutionized silicon-based micro-dosimetry, extending the utility of MOSFETs in capturing minuscule dose variations with high precision (43). This innovation is pivotal for advancing radiation therapy techniques, where accurate dose measurement is paramount. The comprehensive study by Bradley et al. on solid-state micro-dosimetry underscores the evolution of MOSFET technology, providing a foundation for future advancements in the field (44).

Despite the advancements, the application of MOSFETs in small-field dosimetry, particularly in megavoltage photon beams, faces challenges due to measurement precision limitations (61). These limitations necessitate ongoing research and development to refine MOSFET technology for broader applicability in radiation therapy.
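Though not detailed above, the readout principle of a MOSFET dosimeter is the radiation-induced shift in the transistor's threshold voltage, which is approximately proportional to dose in the device's linear regime. A minimal sketch; the sensitivity value is an assumption, and real devices require recalibration as the response saturates with accumulated dose.

```python
# MOSFET dosimeters infer dose from the radiation-induced threshold-voltage
# shift. Linear-regime sketch only; the sensitivity below is illustrative.

def mosfet_dose_cgy(delta_vth_mv, sensitivity_mv_per_cgy=2.7):
    """Dose (cGy) from the threshold-voltage shift (mV), linear regime only."""
    return delta_vth_mv / sensitivity_mv_per_cgy

dose = mosfet_dose_cgy(540.0)  # 540 mV shift -> 200 cGy at this assumed sensitivity
```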

In brachytherapy, specifically intraoperative radiotherapy (IORT) delivered with electron or low-energy photon beams, MOSFETs have shown promising results. Consorti et al. and Ciocca et al. have demonstrated the feasibility and effectiveness of MOSFETs in real-time in vivo dosimetry during IORT, marking a significant step towards personalized and precise radiation dose delivery (62, 63).

The application of MOSFETs extends beyond the clinical setting into research and development, as evidenced by the work of Kron, which explores thermoluminescence dosimetry and its applications in medicine, providing a comprehensive overview of the potential and limitations of MOSFETs and other dosimetric tools in advancing radiation therapy (64).

MOSFET dosimetry represents a cornerstone in the evolution of radiation therapy and brachytherapy, offering unparalleled precision in dose measurement. Despite existing challenges, ongoing research and technological enhancements promise to expand the scope of MOSFET applications.

Film Dosimetry

Film dosimetry has long served as a cornerstone in the measurement of dose distribution, providing crucial data for both clinical applications and research in radiation biology. This section delves into the evolution from radiographic to radiochromic film, highlighting the technological advancements, applications, and ongoing developments in this field.

Radiographic Film

Radiographic film, due to its high spatial resolution and ease of use, once held a pivotal role in radiation therapy dosimetry. It was instrumental in the verification of dose distributions, particularly in complex radiotherapy techniques such as IMRT and SRS (65). The process of using radiographic film in dosimetry involves the film being exposed to radiation, after which it undergoes chemical development to visualize the dose distribution. This method has been vital for ensuring accurate dose delivery to patients, facilitating the optimization of treatment plans to maximize tumor control while minimizing harm to healthy tissues (64, 66).

Despite its significance, the use of radiographic film in dosimetry has seen a decline. This is attributed to several factors, including the labor-intensive and error-prone nature of film development, the need for extensive calibration, and the influence of physical factors such as temperature and humidity on film response. The advent of digital technologies offering real-time data acquisition and superior dose measurement capabilities has further contributed to its diminished role in contemporary dosimetry (67, 68).

Radiochromic Film

The introduction of radiochromic film marked a significant technological breakthrough in film dosimetry. Unlike radiographic film, radiochromic film does not require chemical development; its response to radiation is immediate and permanently visible, providing a direct measure of absorbed dose. This feature, coupled with its tissue-equivalent response, makes it particularly advantageous for high-precision dosimetry in complex treatment modalities (69, 70).

Radiochromic films, such as Gafchromic™ EBT series films, have found widespread application in the measurement of dose distributions with high spatial resolution. They are especially useful in areas where precise dose measurement and verification are critical, including IMRT, VMAT, and SRS (71, 72). The benefits of radiochromic film over radiographic film and other dosimeters include independence from dose rate, energy independence over a broad range, and minimal angular dependence. These characteristics, along with the absence of a developing process, make radiochromic film a more robust and user-friendly option for dosimetry (68, 72).

Radiochromic films, particularly those developed in recent years, have transformed the landscape of radiation dosimetry, offering a compelling alternative to traditional radiographic films for a wide array of applications. The hallmark of radiochromic films lies in their self-developing nature, eliminating the need for chemical processing and thus providing immediate, high-resolution spatial dosimetry data post-irradiation. This advantage is particularly pronounced in the context of high dose rate (HDR) brachytherapy, where precision and accuracy in dose delivery are paramount (73). The near tissue-equivalence and independence from chemical developers make them highly suitable for complex dosimetric verifications, such as those involved in stereotactic radiosurgery and stereotactic radiotherapy (74).

The advancements in radiochromic film technology, specifically the transition from EBT2 and EBT3 to EBT4 and EBT-XD films, have been driven by the need for higher sensitivity and more robust dose response characteristics. These improvements facilitate more accurate dosimetry in both photon and proton therapy, where dose gradients are steep and dose delivery is highly localized (75, 76). The introduction of multichannel dosimetry techniques has significantly enhanced the utility of radiochromic films. By exploiting the differential dose responses across the color channels of a scanner, multichannel dosimetry provides a method to correct for scanner nonuniformities and film inhomogeneities, thus improving the accuracy of dose measurements (77). This technique has shown considerable promise in patient-specific quality assurance (QA), enabling more precise verification of complex treatment plans (78, 79).
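The multichannel principle can be illustrated with toy numbers: each color channel carries its own dose calibration, and a channel-independent disturbance (scanner nonuniformity or film-thickness variation, expressed here as an optical-density offset) is solved for by requiring the channel doses to agree. The linear calibrations below are invented solely to show the mechanics, not real film response data.

```python
# Toy sketch of multichannel film dosimetry: find the channel-independent
# OD disturbance that makes the three channel doses agree.

CAL = {"red": 12.0, "green": 18.0, "blue": 30.0}  # invented netOD -> Gy slopes

def channel_doses(net_od, delta=0.0):
    """Per-channel dose after removing a channel-independent OD offset delta."""
    return {ch: CAL[ch] * (net_od[ch] - delta) for ch in CAL}

def multichannel_dose(net_od):
    """Scan for the offset that minimizes inter-channel dose disagreement."""
    best = None
    for step in range(-100, 101):
        delta = step * 0.0005                       # search +/- 0.05 OD
        doses = list(channel_doses(net_od, delta).values())
        spread = max(doses) - min(doses)
        if best is None or spread < best[0]:
            best = (spread, sum(doses) / len(doses))
    return best[1]

# A 2 Gy exposure with a spurious +0.01 OD offset on every channel:
net_od = {"red": 2 / 12 + 0.01, "green": 2 / 18 + 0.01, "blue": 2 / 30 + 0.01}
dose = multichannel_dose(net_od)  # recovers ~2 Gy
```

Because the channels respond to the disturbance with different slopes, requiring agreement pins down both the disturbance and the dose, which is the essence of the correction described above.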

Ongoing developments in radiochromic film technology continue to refine its application in clinical settings. Innovations such as enhanced sensitivity, improved spatial resolution, and the introduction of multichannel analysis techniques are expanding its utility in dosimetry, providing clinicians with more accurate and detailed information for treatment planning and verification (72, 80). As this technology evolves, its role in ensuring precise and safe radiation therapy delivery is set to increase, underscoring its importance in the continued advancement of radiation therapy practices.

TLD/OSLD

Thermoluminescence dosimetry (TLD) and optically stimulated luminescence dosimetry (OSLD) have a rich history and continue to play a pivotal role in radiation dosimetry, especially within the field of radiation biology and medical applications. The foundation for TLD was laid in the early 1950s, with notable contributions from pioneers such as Farrington Daniels, who demonstrated the first medical application of TLD. Daniels et al. described an innovative approach where TLD crystals were ingested by patients to measure internal radiation dose (81).

This groundbreaking work laid the groundwork for the extensive utilization of TLD in medicine, particularly in dosimetry for diagnostic and therapeutic purposes. Subsequently, John Cameron furthered the application of TLDs in radiation therapy, establishing lithium fluoride (LiF:Mg, Ti), patented as TLD-100 by Harshaw Chemicals, as a standard material for TLD, thereby significantly advancing medical dosimetry (66, 82).

In parallel, the development and application of OSLD have seen significant advancements. The phenomenon of OSL has been studied since the 19th century, but its application in dosimetry became feasible with the development of stable and sensitive materials like aluminum oxide (Al2O3:C) (64). This material's ability to store energy at room temperature and its sensitivity to both light and heat have made OSLDs particularly useful for personal dosimetry and environmental monitoring.

The evolution of TLD and OSLD materials and their application in radiation dosimetry have been extensively documented and refined over the years. These dosimeters' versatility allows for their application in a wide range of settings, from high-energy physics to clinical radiation therapy and diagnostic radiology (83, 84).

Looking towards the future, the development of new TLD and OSLD materials with enhanced sensitivity and specificity is ongoing. These advancements aim to meet the evolving needs of radiation dosimetry, including higher precision in dose measurement and the ability to accurately measure complex dose distributions in emerging radiation therapy techniques. The integration of OSLD with digital technologies presents opportunities for real-time dosimetry, contributing to safer and more effective radiation therapy treatments.

STATE-OF-THE-ART DOSIMETRY TECHNIQUES

The landscape of dosimetry equipment has significantly evolved to incorporate a variety of sophisticated technologies, each offering unique advantages in the measurement of radiation doses, crucial for both diagnostic radiology and radiotherapy. Among these, synthetic diamond detectors stand out for their exceptional dosimetric properties, including high sensitivity, excellent spatial resolution, and negligible temperature dependence. These characteristics make them ideal for precision dosimetry in complex radiation fields, such as those encountered in IMRT and SRS. Additionally, radiochromic films and 3D Gels offer distinctive advantages in visualizing and measuring the three-dimensional dose distributions, providing invaluable insights into the complex dose gradients and verifying treatment plans with high spatial accuracy.

Further advancements in dosimetry equipment include array detectors, scintillators, and Cherenkov cameras, each contributing uniquely to the field. Array detectors, composed of numerous small dosimeters arranged in a matrix, allow for high-resolution mapping of radiation fields, essential for quality assurance in radiotherapy. Scintillators, known for their ability to convert ionizing radiation into visible light, play a critical role in real-time radiation monitoring, offering rapid response times and high sensitivity. Cherenkov cameras capture the Cherenkov radiation emitted when charged particles move through a dielectric medium faster than the speed of light in that medium. This novel approach enables real-time imaging of radiation beams, offering a new avenue for verifying and adjusting radiotherapy treatments dynamically.

Natural and Synthetic Diamond Detectors

Synthetic diamond detectors represent a significant advancement in the field of dosimetry, offering a reliable and reproducible means for radiation measurement. The inception of diamond as a material for dosimetry traces back to 1948, with McKay's proposal of natural diamonds as dosimeters (85). Despite their potential, natural diamonds presented challenges including rarity, cost, and the need for bespoke characterization (86). The PTW-60003 dosimeter, leveraging a natural diamond crystal, emerged in the 1990s as a commercial solution, albeit with limitations such as the need for priming and variability in crystal impurities (87, 88).

Transitioning to synthetic diamonds mitigated these challenges. The advancement in chemical vapor deposition (CVD) and high-pressure high-temperature (HPHT) techniques facilitated the production of synthetic diamond detectors with enhanced reproducibility and affordability (89, 90). These developments underscored the evolution from natural to synthetic diamonds in dosimetry, offering a more practical and scalable solution for radiation measurement.

The dosimetric properties of synthetic diamond detectors have been extensively characterized, revealing their aptitude for clinical dosimetry. Their negligible temperature dependence and dose-rate sensitivity make them suitable for precise radiation measurements in varied clinical settings (91, 92). Studies have shown the effectiveness of synthetic single crystal diamond detectors in small field dosimetry, highlighting their high spatial resolution and minimal perturbation effects (93, 94).

The dosimetric response of synthetic diamond detectors has additionally been investigated under different radiation therapy modalities, including IMRT and SRS. The performance of these detectors in small radiation fields, characterized by high dose gradients, further demonstrates their capability to deliver accurate dose measurements essential for the optimization of treatment plans (95, 96).

3D Polymer Gel Dosimetry

Polymer gel dosimetry, a technique integral to the advancements in clinical radiotherapy, offers an unparalleled capability in the 3D mapping of dose distributions with remarkable spatial resolution. Polymer gel dosimeters, composed of radiation-sensitive chemicals that polymerize proportionally to the absorbed dose upon irradiation, represent a class of dosimeters that facilitate the direct measurement of three-dimensional dose distributions. This is a feature that conventional dosimeters, limited to one- or two-dimensional measurements, cannot provide (97, 98). The structural changes induced by radiation in these dosimeters can be observed through various readout methods including magnetic resonance imaging (MRI), computed tomography (CT), optical scanning, and ultrasonography, each offering a unique insight into the dosimetric properties altered by radiation (99–102).

The utility of polymer gel dosimetry extends beyond mere dose verification; it encompasses the measurement of complex 3D dose distributions, radiological tissue equivalence, independence from radiation direction, and high spatial resolution. Moreover, these dosimeters can integrate the dose over a treatment's duration, providing a comprehensive view of the dosimetric landscape. Despite the presence of toxic components in certain gel formulations, their fabrication and handling remain relatively safe, as long as appropriate protective measures are taken during use (103–105).

The clinical applications of polymer gel dosimeters predominantly align with external beam radiotherapy. The evolution of sophisticated radiotherapeutic techniques, such as IMRT, VMAT, and stereotactic radiosurgery, has necessitated a robust and accurate 3D dosimetry system. Among the various polymer gel dosimeters explored, the PRESAGE gel dosimeter stands out for its efficacy in capturing accurate and feasible 3D dose distributions, highlighting the critical role of gel dosimetry in modern radiotherapy (106).

Research into polymer gel dosimetry has unveiled dosimeters with varying characteristics tailored to clinical needs. The quest for the optimal gel dosimeter continues, as each variant presents a unique set of advantages and limitations. Factors such as dose accuracy, resolution, reproducibility, sensitivity, and the influence of beam energy and dose rate on dosimeter response are critical in determining the suitability of a polymer gel dosimeter for clinical application (98, 100).
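For MRI readout, the gel's response is typically calibrated through the spin–spin relaxation rate, which is approximately linear in dose over the working range: R2(D) = R2₀ + α·D. Inverting this calibration maps an R2 image to a 3D dose map. The slope and intercept below are hypothetical fit results from calibration vials, not measured data.

```python
# Hedged sketch of the linear R2-dose calibration used in MRI readout
# of polymer gels. Both constants are assumed fit values.

R2_0 = 1.2     # s^-1, relaxation rate of unirradiated gel (assumed)
ALPHA = 0.25   # s^-1 Gy^-1, dose sensitivity (assumed)

def dose_from_r2(r2_measured):
    """Invert R2 = R2_0 + ALPHA * D to map a measured R2 value to dose (Gy)."""
    return (r2_measured - R2_0) / ALPHA

dose = dose_from_r2(2.2)  # -> 4.0 Gy for this hypothetical voxel
```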

Array Detectors

The meticulous verification of dose distributions in IMRT and VMAT is pivotal for the assurance of treatment integrity and patient safety. The evolution of array detectors has significantly contributed to the streamlining of patient-specific QA processes, enabling rapid and efficient dose verification. Initially, 2D array detectors, comprising diodes or vented ion chambers, offered a pragmatic solution for dose verification; however, their spatial resolution, constrained by detector spacing, was identified as a limitation, particularly for small fields or fields with steep dose gradients. This challenge has been partially mitigated by employing dual measurements with shifted device positions, effectively enhancing spatial resolution by doubling the detector count within the measurement field (107).

The advent of detector arrays designed for stereotactic applications, with reduced detector spacing down to 2.5 mm, marks a significant advancement, addressing the need for higher resolution in SRS and stereotactic body radiotherapy (SBRT) applications (108). The introduction of devices like PTW's Octavius 4D and Delta4 PHANTOM represents a paradigm shift towards capturing 3D dose distributions. The Octavius 4D employs a rotating platform to simulate the movement around the patient, leveraging depth-dependent attenuation and scatter factors for dose scaling, thereby reconstructing the 3D dose distribution from en-face measurements (109). Conversely, the Delta4 PHANTOM utilizes two orthogonal detector planes, applying measurement-guided correction factors to refine the dose distribution calculated by the treatment planning system (TPS), offering a nuanced approach to dose verification (110).

Sun Nuclear's ArcCHECK system introduces a novel helical arrangement of detectors to maintain an optimal orientation for arc delivery segments, facilitating the reconstruction of 3D dose distributions through entrance and exit dose measurements. This methodology enables the use of diode measurements as correction factors to adjust a precalculated relative dose distribution, underscoring the innovative approaches to dose verification in arc therapy (111).
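Across these array systems, measured and planned distributions are commonly compared with the gamma index, which folds a dose-difference criterion and a distance-to-agreement criterion into a single per-point pass/fail value. A deliberately simplified 1D sketch follows; the gamma concept is standard, but the profiles and the 3%/3 mm criterion here are illustrative, and clinical tools evaluate the metric in 2D or 3D.

```python
# Simplified 1D gamma-index comparison of a measured profile against a
# planned one (dose-difference normalized to the planned maximum).

import math

def gamma_1d(measured, planned, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
    """Per-point gamma of a measured 1D profile against a planned profile."""
    d_max = max(planned)
    gammas = []
    for i, dm in enumerate(measured):
        best = math.inf
        for j, dp in enumerate(planned):
            dd = (dm - dp) / (dose_crit * d_max)          # normalized dose difference
            dr = (i - j) * spacing_mm / dist_crit_mm      # normalized distance
            best = min(best, math.hypot(dd, dr))
        gammas.append(best)
    return gammas

planned = [0.1, 0.5, 1.0, 2.0, 2.0, 2.0, 1.0, 0.5, 0.1]
measured = [0.1, 0.52, 1.02, 1.98, 2.02, 2.0, 1.0, 0.49, 0.1]
pass_rate = sum(g <= 1.0 for g in gamma_1d(measured, planned, 2.5)) / len(measured)
```

A point passes (gamma ≤ 1) if it satisfies the combined criterion with respect to any point of the planned distribution, which is what makes the metric tolerant of small spatial shifts in steep-gradient regions.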

Scintillator

Scintillators have been employed as dosimeters due to their ability to convert ionizing radiation into visible light, a principle that has been utilized in radiation detection for decades (112, 113).

The deployment of scintillation fiber optic dosimeters has seen a notable upswing, attributed to their intrinsic advantages such as in vivo, real-time, and intracavitary measurements alongside high spatial resolution. These characteristics stem from their diminutive physical size and mechanical flexibility, rendering them exceptionally suited for a broad spectrum of applications in radiotherapy dosimetry including brachytherapy, IMRT, superficial therapy, stereotactic radiosurgery, proton therapy, and small-field dosimetry (114).

The operational mechanism of fiber optic dosimeters capitalizes on the radioluminescence properties of materials, where the interaction of ionizing radiation with the scintillator affixed to the fiber's tip yields a visible signal proportional to the absorbed dose. This optical signal is then channeled through the optical fiber to a detector for dose measurement (115). A prevalent issue with fiber optic dosimetry is the contamination of the signal by Cherenkov radiation, which is not directly proportional to the dose, necessitating correction procedures to ensure accurate dose measurements in the scintillator (116, 117).
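One established correction is chromatic (two-window) removal: readings in two spectral windows are modeled as linear mixtures of scintillation and Cherenkov light, and the resulting 2×2 system is inverted. The mixing coefficients below are hypothetical; in practice they come from a dedicated calibration of the specific probe.

```python
# Sketch of chromatic Cherenkov removal: solve two spectral-window
# readings for the scintillation component. Coefficients are assumed.

def unmix_scintillation(m1, m2, a1, b1, a2, b2):
    """Solve m1 = a1*S + b1*C and m2 = a2*S + b2*C for the scintillation S."""
    det = a1 * b2 - a2 * b1
    return (m1 * b2 - m2 * b1) / det

a1, b1, a2, b2 = 0.9, 0.3, 0.2, 0.8      # assumed mixing coefficients
m1 = a1 * 100.0 + b1 * 40.0              # synthetic window-1 reading
m2 = a2 * 100.0 + b2 * 40.0              # synthetic window-2 reading
s = unmix_scintillation(m1, m2, a1, b1, a2, b2)  # recovers S = 100
```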

A particular challenge arises in proton therapy and other high-LET beams, where the non-proportionality between scintillator light output and proton dose becomes evident. This saturation effect at high stopping powers, attributed to ionization quenching, complicates the linear relationship between scintillation signal and energy deposition (116, 118). This issue underscores the necessity for advancements in scintillation dosimetry to enhance its efficiency and reliability in high-LET beam applications.
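The quenching behavior described above is commonly modeled with Birks' law, dL/dx = S·(dE/dx) / (1 + kB·dE/dx): light output grows sub-linearly as stopping power rises. The kB value below is illustrative only; real values are material-specific and determined experimentally.

```python
# Hedged sketch of Birks-law ionization quenching; kB is an assumed,
# illustrative value, not a measured scintillator parameter.

def birks_light_yield(dedx_mev_per_cm, scint_eff=1.0, kb_cm_per_mev=0.01):
    """Light output per unit path (arbitrary units) under Birks quenching."""
    return scint_eff * dedx_mev_per_cm / (1.0 + kb_cm_per_mev * dedx_mev_per_cm)

# Quenching is negligible for low-LET electrons but substantial where
# dE/dx is large, as near a proton Bragg peak:
low_let = birks_light_yield(2.0)     # near-proportional response
high_let = birks_light_yield(400.0)  # heavily saturated
```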

The efficacy of scintillation dosimeters can be augmented using photodetectors with superior photon collection and efficiency. Predominantly, photomultiplier tubes (PMTs) and photodiodes have been the photodetectors of choice in scintillation dosimetry. For two-dimensional measurements, technological advancements have facilitated the use of multichannel PMTs, photodiode arrays, or sophisticated imaging systems like charge-coupled device (CCD) and complementary metal-oxide semiconductor (CMOS) cameras as effective readout systems (119, 120).

While scintillators offer a promising avenue for enhancing the precision and efficacy of radiation therapy dosimetry, addressing the challenges associated with Cherenkov radiation and non-proportionality in high-LET beams is imperative. Continuous advancements and innovations in scintillation materials and photodetector technology remain crucial for the evolution of dosimetry methods to meet the growing demands of modern radiation therapy techniques.

Cherenkov Camera

The newly commercialized Cherenkov Camera represents a novel approach in radiation dosimetry, leveraging the Cherenkov effect, which is the emission of light when charged particles move through a dielectric medium at speeds exceeding the phase velocity of light in that medium (121, 122). This phenomenon, harnessed correctly, offers a non-invasive method to monitor and verify radiation therapy beams in real-time, providing critical insights into the accuracy of treatment delivery.
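The Cherenkov condition v > c/n implies a kinetic-energy threshold for electrons, E_th = mₑc² (1/√(1 − 1/n²) − 1). For tissue-like refractive indices this lands at a few hundred keV, which is why megavoltage therapy beams produce abundant Cherenkov light. A short numerical sketch (the refractive index of 1.4 is a representative tissue-like assumption):

```python
# Cherenkov kinetic-energy threshold for electrons in a medium of
# refractive index n; n = 1.4 is an assumed tissue-like value.

import math

ELECTRON_REST_MEV = 0.511  # electron rest energy, MeV

def cherenkov_threshold_mev(refractive_index):
    """Kinetic-energy threshold for an electron to emit Cherenkov light."""
    gamma_th = 1.0 / math.sqrt(1.0 - 1.0 / refractive_index ** 2)
    return ELECTRON_REST_MEV * (gamma_th - 1.0)

threshold = cherenkov_threshold_mev(1.4)  # ~0.22 MeV for a tissue-like index
```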

Advancements have focused on overcoming the challenges associated with Cherenkov radiation's weak signal and its attenuation in tissue. For instance, Hachadorian et al. (123) utilized spatial frequency domain imaging to correct Cherenkov light attenuation in tissue, enhancing the quantification of surface dosimetry during whole breast radiation therapy. This approach enables a more accurate representation of the dose delivered to the patient, potentially improving treatment outcomes.

The relationship between Cherenkov light emission and radiation dose has been established, with studies demonstrating a direct correlation under specific conditions (124). This correlation is pivotal for the application of Cherenkov Cameras in dosimetry, as it allows for the real-time visualization and quantification of radiation dose delivery, providing a novel means for dose verification in radiotherapy.

The Cherenkov effect has been observed in various scenarios, including within the human eye during radiation treatment, offering a unique perspective on the radiation exposure experienced by astronauts (125, 126). This wide range of observations underscores the ubiquity and potential of Cherenkov radiation in medical applications.

Algorithm development has been crucial in enhancing the utility of Cherenkov imaging for radiation therapy verification. By developing algorithms for intrafraction radiotherapy beam edge verification, researchers have made significant strides in ensuring the precision of radiation therapy (122).

While the use of Cherenkov Cameras has increased within radiation therapy, the fundamental opaque nature of the human body limits the system to use cases involving shallow targets and applications at the patient's surface. Correcting for Cherenkov light attenuation in tissue using spatial frequency domain imaging represents a significant advancement in quantitative surface dosimetry. This method has shown promise in improving the accuracy of dose measurements during radiation therapy, particularly in complex treatments such as whole breast irradiation (123).

EVOLUTION OF DOSIMETRIC TECHNIQUES FOR INTERNAL EMITTERS

The systematic progression of dosimetric techniques for internal emitters has produced a transition from empirical estimates to rigorously defined methodologies. The inception of this transition can be traced to the Manchester System, codified by Meredith in 1947, which provided a structured approach for radium and radon therapeutic implants (127). This systematic dosimetry was predicated on meticulously calculated radium quantity and distribution, intended to maximize therapeutic efficacy while mitigating undue tissue exposure. Dosimetry was further refined by Marinelli's contribution in 1942, which introduced the Marinelli formula, delineating the internal dosimetry of artificial radioisotopes (128). Marinelli's pioneering work laid the foundation for understanding dose distribution within tissues, considering the emission characteristics of beta particles from radioisotopes and their interactions within the human body. The formula incorporated the radioisotope concentration, its physical decay, and its biological elimination, adhering to an exponential decay model.
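As a rough sketch of that logic, the classic Marinelli-Quimby expression estimated the total beta dose from the tissue concentration, the mean beta energy, and an effective half-life combining physical decay with biological elimination. The I-131 biological half-life and tissue concentration below are assumed for illustration, not taken from the source:

```python
def effective_half_life(t_phys_days: float, t_bio_days: float) -> float:
    """Effective half-life combining physical decay and biological
    elimination: 1/T_eff = 1/T_phys + 1/T_bio."""
    return (t_phys_days * t_bio_days) / (t_phys_days + t_bio_days)

def marinelli_beta_dose_rad(conc_uci_per_g: float, mean_beta_mev: float,
                            t_eff_days: float) -> float:
    """Classic Marinelli-Quimby estimate of the total beta dose (rad)
    delivered to complete decay: D = 73.8 * C * E_mean * T_eff."""
    return 73.8 * conc_uci_per_g * mean_beta_mev * t_eff_days

# Illustrative I-131 example: T_phys ~ 8.0 d; an assumed 80 d biological
# half-life and 1 uCi/g uptake concentration (hypothetical values).
t_eff = effective_half_life(8.0, 80.0)
print(round(marinelli_beta_dose_rad(1.0, 0.19, t_eff), 1))
```

The exponential-decay assumption enters through the effective half-life: the time-integrated activity per gram is proportional to C multiplied by T_eff.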

The shift from empirical to systematic dosimetry marked a pivotal era, one that meticulously considered the biological and physical intricacies of radiation interactions within the body. This shift not only improved therapeutic outcomes but also fostered the development of personalized treatment plans, aligning radiation therapy with the emerging paradigms of precision medicine.

Beta and Gamma Dosimetry

The advancements in beta and gamma dosimetry have been pivotal in enabling clinicians to accurately assess the internal distribution of doses following radionuclide administration. The seminal work by Marinelli and colleagues in 1948 established the geometric factor ‘g’, which related the beta-emitting radioisotope concentration in tissue to the absorbed dose, signifying a major leap in internal dosimetry (129). This geometric factor was particularly crucial for gamma dosimetry as it considered the size and shape of the tissue mass and gamma-ray absorption, a sophisticated approach given the complex nature of gamma interactions in the human body.

Concurrently, the reciprocal theorem posited by Mayneord in 1945 provided a mathematical foundation for calculating integral doses from point sources of radiation to volumes, analogous to the doses from extended sources to a point. This theorem has since been instrumental in solving complex problems related to the distribution of gamma rays within the body (130).

These developments underscored the necessity for and complexity of considering the three-dimensional geometry of tissues in dosimetry calculations. Such geometric considerations have since been incorporated into a variety of computational models, which have become more sophisticated with the advent of computational technologies. These models are critical for estimating patient-specific doses and for the design of radiopharmaceuticals.

The evolution of dosimetry techniques from beta particles' direct ionization considerations to the intricate interactions of gamma rays within the body tissue matrices has underscored the interdisciplinary nature of radiation dosimetry, amalgamating physics, biology, and computational sciences. It laid the groundwork for further innovations in dosimetry that could accommodate increasingly complex biological systems and heterogeneous radiation distributions.

MIRD and Monte Carlo Simulations

The founding of the Medical Internal Radiation Dose (MIRD) Committee in 1965 was a defining moment in the field of radiopharmaceutical dosimetry, with a clear mission to provide accurate dosimetry for patients undergoing radionuclide therapy (131). This committee's establishment coincided with the emergence of Loevinger's influential dose equations in 1955, which provided a robust framework for internal dose calculations, particularly for beta and gamma emitters such as Iodine-131 (132).

Loevinger's dose equations were fundamental in standardizing the calculations of absorbed dose from internally administered radionuclides, considering factors such as the radionuclide's energy, its effective half-life, and the activity concentration within the tissue. These equations not only allowed for more accurate dose assessments but also facilitated a broader understanding of the dose distributions within the human body, thereby contributing significantly to the safety and effectiveness of radiopharmaceutical use.
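The structure of such calculations can be sketched in the later MIRD-style form, in which a cumulated activity (activity integrated over time, here for simple mono-exponential clearance) is multiplied by a dose-per-decay factor. All numeric inputs below are hypothetical:

```python
import math

def cumulated_activity_bq_s(a0_bq: float, t_eff_s: float) -> float:
    """Cumulated activity for mono-exponential clearance:
    A_tilde = A0 / lambda_eff = 1.443 * A0 * T_eff."""
    return a0_bq * t_eff_s / math.log(2)

def mird_dose_gy(a0_bq: float, t_eff_s: float, s_gy_per_bq_s: float) -> float:
    """MIRD schema: absorbed dose = cumulated activity x S value, where
    S (Gy per Bq*s) folds in the emission energies and source-target
    geometry."""
    return cumulated_activity_bq_s(a0_bq, t_eff_s) * s_gy_per_bq_s

# Hypothetical numbers: 100 MBq administered, 7-day effective half-life,
# and an assumed organ self-dose S value of 5e-15 Gy/(Bq*s).
print(round(mird_dose_gy(1e8, 7 * 86400, 5e-15), 3))
```

Separating the time-dependent biology (the cumulated activity) from the radionuclide- and anatomy-dependent physics (the S value) is precisely what made the formalism standardizable.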

Parallel to the theoretical advancements, the evolution of Monte Carlo techniques brought a paradigm shift in the approach to dosimetry. Brownell and colleagues' introduction of these techniques into the dosimetric calculations allowed for the probabilistic assessment of radiation transport and interaction within the body, thus accounting for the stochastic nature of radiation interactions (133). These simulations could model complex geometries and heterogeneous tissue compositions with unprecedented precision.

The Monte Carlo method, with its ability to statistically simulate the journey of photons through various tissues, provided a more accurate representation of the physical processes occurring during radiation transport. This marked a significant enhancement over previous deterministic models, enabling the consideration of various scattering and absorption events that could affect the dosimetric calculations.
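The core of such a simulation can be illustrated in a few lines: photon interaction depths are drawn from the exponential attenuation law by inverse-transform sampling. The attenuation coefficient here is illustrative only:

```python
import math
import random

def sample_interaction_depths(mu_per_cm: float, n: int, seed: int = 1) -> list:
    """Sample photon free-path lengths from the exponential attenuation
    law p(x) = mu * exp(-mu * x) via inverse-transform sampling."""
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], so the logarithm is always defined.
    return [-math.log(1.0 - rng.random()) / mu_per_cm for _ in range(n)]

# With an illustrative attenuation coefficient of 0.07 /cm (roughly the
# order of magnitude for megavoltage photons in water), the sampled mean
# free path converges on 1/mu ~ 14.3 cm as the sample size grows.
depths = sample_interaction_depths(0.07, 100_000)
print(round(sum(depths) / len(depths), 1))
```

A full transport code layers onto this kernel the sampling of interaction type (scatter versus absorption), scattering angle, and energy loss at each sampled site.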

The MIRD Committee's work, underpinned by Loevinger's dose equations and the advancement of Monte Carlo simulations, has been instrumental in the refinement of radiopharmaceutical dosimetry. It has provided the methodological bedrock upon which current practices in personalized dosimetry are based, supporting the targeted treatment paradigms that are the hallmark of modern nuclear medicine.

Radiopharmaceutical Phantom Models and Dose Calculations

Phantom models have been instrumental in the advancement of dosimetric techniques, allowing for refined and representative calculations of radiation dose distributions in various patient demographics and physiological states. These models serve as virtual patients, offering a standardized basis for dose calculations and contributing significantly to the personalization of radiopharmaceutical therapy.

The initial efforts in developing phantom models were marked by Fisher and Snyder's work in 1966, which focused on the distribution of dose from a gamma ray source distributed uniformly within an organ. Their model addressed the need for standardized yet adaptable dose estimation in heterogeneous anatomical structures (134). This work was foundational for the MIRD Committee, which later adopted and expanded upon these methodologies to include a range of phantom models that simulated different ages, genders, and physiological conditions.

Further developments by Warner et al. in 1975 led to the dosimetric analysis of phantom models representing male humans of various ages, providing a framework for understanding how absorbed doses vary with age and, consequently, with the changing geometry and composition of the human body (135). These age-specific dose calculations are crucial for pediatric radiopharmaceutical dosimetry, where the varying sizes and growth rates of organs significantly impact dose distributions.

Cloutier et al. expanded the repertoire of phantom models in 1977 by introducing a pregnant woman model, addressing the unique dosimetric challenges posed by gestational changes. This model was essential for assessing the risk and ensuring the safety of both the expectant mother and the fetus when administering radiopharmaceuticals (136). It represented a significant advancement in the field, highlighting the necessity for dose calculation models that accommodate the wide variety of patient anatomies and physiological conditions encountered in clinical practice.

The continued development of phantom models has provided vital tools for dosimetrists and clinicians, enabling more accurate and personalized dose calculations. By simulating the physical complexities of the human body, these models have facilitated a deeper understanding of dose distribution patterns, leading to safer and more effective treatment protocols for patients in diverse physiological states.

Radiological Threats and Events Dosimetry

In the shadow of the growing urbanization and technological advancements, the threat of radiological events has escalated, posing a multifaceted risk to global security and public health. The Central Intelligence Agency (CIA) warned as early as 2003 of the tangible possibility that terrorist organizations might acquire the means to produce not only nuclear weapons but also radiological dispersal devices (RDDs), commonly known as ‘dirty bombs’ (137).

In the wake of a radiological incident, emergency management protocols dictate the sorting of individuals based on their degree of exposure and need for immediate medical attention. Biodosimetry can serve as a cornerstone for this process by assessing the doses received individually, aiding in the medical management of the situation (138). The use of personal devices such as smartphones, which can be equipped with dosimeters, presents an innovative approach to monitoring individual exposure, facilitating rapid triage and treatment allocation (139).

The essence of biodosimetry lies in its ability to provide an estimation of absorbed dose based on biological indicators, such as the frequency of dicentric chromosomes, micronuclei, or gamma-H2AX foci, which correlate with radiation exposure (140, 141). These methods have been established as reliable tools for detecting individual radiation doses with commendable accuracy (140, 142). Further, the consolidation of efforts under the RENEB project indicates a move towards harmonizing biodosimetric techniques across European laboratories, aiming for a standardized and coordinated response to radiological events (143).
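The dicentric assay, for example, is conventionally calibrated with a linear-quadratic dose response, Y = c + alpha*D + beta*D^2, which is then inverted to estimate the dose received by an exposed individual. A sketch with assumed (not laboratory-fitted) coefficients:

```python
import math

def dose_from_dicentric_yield(y: float, c: float, alpha: float,
                              beta: float) -> float:
    """Invert the linear-quadratic calibration Y = c + alpha*D + beta*D^2
    to estimate absorbed dose D (Gy) from an observed dicentric yield Y
    (dicentrics per cell), taking the physical (positive) root."""
    a, b, k = beta, alpha, c - y
    disc = b * b - 4 * a * k
    if disc < 0:
        raise ValueError("yield below background for these coefficients")
    return (-b + math.sqrt(disc)) / (2 * a)

# Illustrative gamma-ray calibration coefficients (assumed for this
# sketch; real curves are fitted per laboratory): c = 0.001,
# alpha = 0.02 /Gy, beta = 0.06 /Gy^2. An observed yield of 0.30
# dicentrics per cell then maps back to a dose just over 2 Gy.
print(round(dose_from_dicentric_yield(0.30, 0.001, 0.02, 0.06), 2))
```

In practice the uncertainty on the scored yield (a Poisson count over a finite number of cells) propagates into a confidence interval on the dose estimate.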

Each biodosimetric method has its advantages and constraints. Retrospective physical dosimetry methods like electron paramagnetic resonance (EPR) and OSL can be performed on non-biological materials and are thus less subject to biological variability and the time constraints of sample collection. EPR in particular provides a lasting record of exposure and can be applied years after the event (144, 145). Cytogenetic assays, due to their biological basis, correlate well with clinical outcomes and can be applied regardless of the individual's prior health status or age. They are especially advantageous for assessing heterogeneous exposures and partial-body irradiations, which are common in accidental scenarios (146). However, the time required for analysis and the need for fresh blood samples can impede the rapid deployment of cytogenetic assays.

These methods require the availability of suitable materials for analysis and may be influenced by environmental factors affecting the stability of the radiation-induced signal. In emergency scenarios, the choice of biodosimetric technique is often dictated by the context of the exposure, the resources available, and the immediacy with which dose assessments are required. The integration of these methods into a tiered response system can enhance the accuracy and efficiency of radiological emergency responses, ensuring that individuals receive the most appropriate medical intervention.

CHALLENGES AND LIMITATIONS IN CURRENT DOSIMETRY PRACTICES

Small Field Dosimetry

The precise delivery of radiation doses in stereotactic radiosurgery and stereotactic body radiotherapy is paramount due to the high doses used in treatment and the close proximity of the targeted lesions to critical organs and tissues. These treatments often involve the use of small radiation fields to accurately target and treat small or irregularly shaped lesions while minimizing exposure to surrounding healthy tissue. Small field dosimetry, therefore, presents unique challenges that differ significantly from those encountered in conventional radiotherapy, due to factors such as lateral charged particle disequilibrium, beam collimation effects, and the size of the radiation detectors relative to the beam dimensions (50, 147).

A critical aspect of small field dosimetry is the definition of what constitutes a “small field.” Generally, a field is considered small if its dimensions are smaller than the lateral range of charged particles in the medium, which varies according to the medium's density and the beam's energy. For a 6 MV beam in water, for instance, a field size of less than 3 × 3 cm2 is deemed small (147). The challenges inherent in small field dosimetry are largely due to lateral charged particle disequilibrium, which occurs because the range of secondary charged particles produced in the medium can be comparable to or larger than the field size. This disequilibrium affects both the dosimetry and the application of correction factors, necessitating specialized approaches for beam modeling, dose calculation, and verification (50, 148).
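This definition can be made operational. IAEA TRS-483 quotes an empirical fit for the lateral charged-particle-equilibrium range as a function of beam quality (TPR20,10); a field is then small when its side falls below twice this range plus the detector's outer extent. The sketch below assumes that fit:

```python
def lateral_cpe_range_cm(tpr20_10: float) -> float:
    """Lateral charged-particle-equilibrium range r_LCPE (cm) from the
    empirical beam-quality fit quoted in IAEA TRS-483."""
    return 8.369 * tpr20_10 - 4.382

def is_small_field(side_cm: float, tpr20_10: float,
                   detector_extent_cm: float = 0.0) -> bool:
    """A square field is 'small' when its side falls below twice r_LCPE
    plus the detector extent, i.e., lateral CPE is lost somewhere in
    the detector volume."""
    return side_cm < 2.0 * lateral_cpe_range_cm(tpr20_10) + detector_extent_cm

# For a typical 6 MV beam (TPR20,10 ~ 0.67), r_LCPE ~ 1.2 cm, so fields
# below roughly 2.5 x 2.5 cm2 lose lateral equilibrium -- consistent
# with the < 3 x 3 cm2 rule of thumb once detector size is included.
print(is_small_field(2.0, 0.67), is_small_field(10.0, 0.67))
```

The fit makes explicit why "small" is beam-energy dependent: higher-energy beams launch longer-range secondary electrons, pushing the small-field boundary outward.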

The acquisition of accurate beam data for small fields requires careful consideration of the detector's characteristics, including its response to the steep dose gradients and its physical size relative to the field size. Detectors that are too large can average the dose over a volume that is significant in relation to the field size, leading to inaccuracies. Consequently, the choice of detector is crucial, with options including diodes, microchambers, and plastic scintillators being among those recommended for small field measurements (148, 149).

Challenges presented by small field dosimetry necessitate meticulous attention to the details of beam data acquisition, detector selection, beam modeling, and QA protocols. Traditional techniques used to correct for small fields involved cross-renormalization against an intermediate field size, colloquially referred to as the ‘daisy-chain’ method. While standard practice for nearly a decade, its use has been discouraged in favor of the recommendations of the AAPM Report of Task Group 155 (150). The development of task group reports and guidelines by organizations such as the AAPM, IAEA, and ICRU reflects the ongoing efforts within the radiation oncology community to address these challenges and ensure the safe and effective delivery of radiation therapy in small fields.

FLASH/Ultra-High Dose Rate

The advent of FLASH radiotherapy (RT) represents a paradigm shift in cancer treatment, employing ultra-high dose rates (UHDR) of ≥40 Gy/s to potentially revolutionize the therapeutic landscape. This innovative technique has been demonstrated to offer the possibility of sparing normal tissue without compromising tumor control, a feat not easily achieved with conventional radiotherapy methods (151). The distinctive characteristic of FLASH RT—delivering doses at ultrahigh speed—has been accomplished through experimental setups and modified linear accelerators, hinting at a future where FLASH RT could be integrated into standard treatment protocols (151, 152).

The principle behind the FLASH effect—presumably related to oxygen depletion in normal tissues—remains an area of active investigation. This hypothesis suggests that the rapid delivery of radiation leads to a temporary depletion of oxygen, thereby reducing radiation-induced damage in healthy tissues while maintaining the efficacy against tumor cells (152). The involvement of Monte Carlo simulations in FLASH research is instrumental, providing insights into dosimetric calculations and contributing to the design of hardware capable of achieving such high dose rates (151).

The surge in research publications focusing on FLASH RT indicates a burgeoning interest in this field, emphasizing the exploration of its biological underpinnings, physical mechanisms, and potential clinical applications. Studies have indicated that UHDR irradiation may present a protective effect on normal tissues, coined as the “FLASH” effect, suggesting a favorable therapeutic index where tumor control probability (TCP) is maintained or enhanced without a corresponding increase in normal tissue complication probability (NTCP) (152, 153).

While there are currently no recommendations for FLASH dosimetry methods, or even for the dosimeters themselves, the challenges to be addressed involve either adapting existing techniques to the ultra-high dose rate environment or employing systems that are fully independent of dose rate. Although ionization chambers are the benchmark dosimeter in standard radiotherapy, their ion-recombination issues at ultra-high dose rates may be addressed through changes in chamber design. Fricke dosimeters and calorimeters, by contrast, exhibit dose-rate independence, but their implementation requires sophisticated equipment and training (154).
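The scale of the dosimetric challenge is easiest to see in delivery time: at the ≥40 Gy/s threshold, an entire fraction is delivered in a fraction of a second, leaving no time for the per-pulse monitoring conventional QA assumes. A trivial comparison:

```python
def delivery_time_s(dose_gy: float, dose_rate_gy_per_s: float) -> float:
    """Beam-on time needed to deliver a dose at a given mean dose rate."""
    return dose_gy / dose_rate_gy_per_s

# A 10 Gy fraction at the >= 40 Gy/s FLASH threshold takes 0.25 s or
# less, versus more than a minute at a conventional ~0.1 Gy/s mean dose
# rate -- the regime in which ion-recombination corrections break down.
print(delivery_time_s(10, 40), delivery_time_s(10, 0.1))
```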

Despite the promising outcomes of in vivo studies and the increasing volume of research, the FLASH effect's precise mechanisms and optimal parameters for clinical translation remain areas of ongoing study. The challenge now lies in deepening the understanding of FLASH RT's radiobiological effects, optimizing delivery modalities, and ensuring accurate dosimetry for such high-intensity treatments. As FLASH RT continues to evolve, further studies are required to establish its role in cancer therapy definitively, including refining simulation tools to accurately model the radiolysis and radiobiological processes involved in FLASH irradiation (151, 153).

Protons and Light Ions

Proton, carbon ion, and other heavy charged particle therapies present a distinct paradigm in radiation oncology, distinguished by their unique spatial dose deposition profiles compared to traditional photon therapy. Megavoltage photons are characterized by a low skin dose with a peak at 1–4 cm depth, followed by a gradual decrease in dose due to the inverse square and exponential attenuation. Conversely, charged particles such as protons and carbon ions deposit a significant portion of their energy in the final centimeters of their path, known as the Bragg peak, resulting in minimal to no exit dose beyond this point. This property theoretically allows for superior sparing of tissues immediately downstream from the target, thereby offering a potential dosimetric advantage over photons (155, 156).

The precision required in particle therapy introduces significant challenges and limitations. The absence of an exit dose, while beneficial, also implies a heightened risk associated with depth misestimations. In photon therapy, a depth discrepancy alters the dose by less than 3% per centimeter past the maximum dose depth, due to the gentle dose gradient. In contrast, for particle therapy, inaccuracies in the anticipated versus actual radiological depth can lead to substantial overdosing or underdosing at the distal edge of the tumor, with potential dose variations ranging from 100% to 0% (undershoot) or 0% to 100% (overshoot) for normal tissues beyond the tumor boundary. Such depth errors are particularly critical given the steep dose gradient beyond the Bragg peak and are exacerbated by the enhanced biological effectiveness of protons at low energies (157).

Despite the high accuracy in determining the beam energy exiting the treatment head and the water-equivalent range (158), patient-specific factors such as anatomical changes, setup errors, or depth calculation inaccuracies can introduce uncertainties. Clinical protocols thus conservatively assume up to a 3.5% range uncertainty plus an additional 1–3 mm margin to account for these factors, affecting treatment planning and limiting the utilization of particle therapy's full potential (155).
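That conservative recipe is simple arithmetic. A sketch of the margin calculation, with the fixed term chosen within the 1-3 mm window quoted above:

```python
def proton_range_margin_cm(range_cm: float,
                           fixed_margin_cm: float = 0.1) -> float:
    """Clinical distal margin: 3.5% of the radiological range plus a
    fixed 1-3 mm term (here 1 mm = 0.1 cm by default), following the
    conservative recipe cited in the text."""
    return 0.035 * range_cm + fixed_margin_cm

# For a 15 cm beam range, the planner must pad the distal edge by about
# 0.63-0.83 cm depending on the fixed term chosen -- centimeter-scale
# normal tissue that a perfectly known range would spare.
print(round(proton_range_margin_cm(15.0), 3),
      round(proton_range_margin_cm(15.0, 0.3), 3))
```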

The quest for in vivo verification of particle dose deposition has led to exploring various technologies, including positron emission tomography (PET) for detecting proton-activated positron-emitting nuclei, prompt gamma-ray detection techniques, and thermoacoustic-based methods (159, 160). These investigations aim to enhance the safety and efficacy of particle therapy by providing real-time or near-real-time verification of the dose delivered to the patient, thereby potentially mitigating the uncertainties associated with particle range and enhancing treatment robustness (161).

Neutron Dosimetry

Neutron interactions, crucial to understanding radiation biology, depend on the initial energy of the neutrons and the characteristics of the matter they encounter. These interactions are pivotal as each collision results in energy modifications that influence biological outcomes. As Krane emphasizes, neutron interactions are categorized into elastic scattering, inelastic scattering, and capture reactions, with each type leading to different radiation doses based on the recoil and emitted particles' energies (162). These nuances are fundamental in calculating absorbed doses, where the energy and atom type within the exposed material determine the energy transfer to charged particles.

The dose estimation process, outlined by Thomas, revolves around the concept of kerma—kinetic energy released in matter, which helps gauge the initial energies of all charged particles instigated by uncharged radiation like neutrons and photons per unit mass (163). This calculation is sensitive to the elemental composition of the interacting matter, further complicating dose assessments in heterogeneous biological tissues. Neutron energy-dependent dose conversion factors are then employed to derive dose estimates from neutron fluence (164).

The concept of equivalent dose integrates absorbed dose with a radiation weighting factor, which reflects the relative biological effectiveness (RBE) of different radiation types. This integration is crucial as it accounts for the variable biological impacts of identical absorbed doses from different radiations (165). However, Rossi and Zaider (166) note that these terms, initially designed for radiation protection, focus more on preventing stochastic long-term effects rather than acute responses, making their application in risk assessment somewhat contentious (167).
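Operationally, the equivalent dose is each component's absorbed dose weighted by its w_R and summed. The sketch below uses the continuous neutron weighting function recommended in ICRP Publication 103; the mixed-field absorbed doses are illustrative values, not data from the source:

```python
import math

def neutron_w_r(energy_mev: float) -> float:
    """Neutron radiation weighting factor w_R as the continuous function
    of neutron energy recommended in ICRP Publication 103."""
    if energy_mev < 1.0:
        return 2.5 + 18.2 * math.exp(-(math.log(energy_mev) ** 2) / 6.0)
    if energy_mev <= 50.0:
        return 5.0 + 17.0 * math.exp(-(math.log(2.0 * energy_mev) ** 2) / 6.0)
    return 2.5 + 3.25 * math.exp(-(math.log(0.04 * energy_mev) ** 2) / 6.0)

def equivalent_dose_sv(components) -> float:
    """H_T = sum over radiation types of w_R * D_R, for (w_R, D_Gy) pairs."""
    return sum(w_r * d_gy for w_r, d_gy in components)

# Illustrative mixed field: 10 mGy of gamma rays (w_R = 1) plus 5 mGy of
# ~1 MeV neutrons, whose w_R peaks near 20. The neutron component then
# dominates the equivalent dose despite the smaller absorbed dose.
h = equivalent_dose_sv([(1.0, 0.010), (neutron_w_r(1.0), 0.005)])
print(round(h * 1000, 1), "mSv")
```

This weighting is exactly what makes neutron-to-gamma ratios in mixed-field exposures so consequential for reported doses, as discussed below.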

Detection and measurement challenges further complicate studies involving neutron and mixed-field exposures. Real-time detection capabilities and responsiveness to a broad spectrum of neutron energies are essential, especially given the high-energy neutron activities above 20 MeV expected in space radiation environments (164). Neutron detectors, such as rem meters and tissue equivalent proportional counters, although effective in certain settings, often face limitations in accurately assessing doses from fission neutrons under shielded conditions.

For comprehensive radiobiological studies, precise reporting of radiation parameters, including neutron/photon spectra, peak neutron energies, and gamma-ray energies, is imperative. This specificity helps contextualize the results, particularly in mixed-field reactor exposures where neutron-to-gamma ratios or neutron dose percentages can significantly impact study outcomes (167). Additionally, the type of dose reported, whether free-in-air, midline tissue, or bone marrow dose, can substantially affect the interpretation of gamma-ray exposures. The disparity is even more pronounced for neutron doses, owing to their limited range in tissue, necessitating detailed reporting of the dose to each critical target organ. Lastly, the effect of dose rate or fractionation, which can significantly alter biological responses, must always be delineated to ensure accurate translation of experimental results into practical applications.

Quality Assurance Standardization and International Protocols

The quest for optimal radiation therapy outcomes necessitates stringent QA protocols and adherence to international standards. In this context, the establishment of Accredited Dosimetry Calibration Laboratories (ADCLs) marks a pivotal advancement in the standardization and calibration of radiation therapy equipment, ensuring consistent and accurate dose delivery to patients (42, 43). Following the American Association of Physicists in Medicine's (AAPM) initiative in 1971, ADCLs emerged as fundamental components in the calibration hierarchy, bridging the gap between national standards and end-user instruments in a clinical setting (55). These laboratories are accredited by entities such as the AAPM, and they operate under rigorous quality assurance programs to provide dosimetry calibration services for a wide array of radiation therapy equipment. The calibrations they perform are traceable to the National Institute of Standards and Technology, ensuring uniformity of measurement.

The services provided by ADCLs include the calibration of ionization chambers, electrometers, and other dosimetry devices across the spectrum of therapeutic radiology, from low-energy X ray units to high-energy linear accelerators. With standardized calibration techniques, the uncertainty of the calibration factors provided by these laboratories is between 1% and 2%. Any discrepancies or inaccuracies in dose calibration can lead to either an underdose, which might result in ineffective treatment, or an overdose, which can cause unnecessary damage to healthy tissues. Thus, ADCLs have a direct impact on patient outcomes and play a crucial role in the advancement of radiation therapy techniques. The evolution of ADCLs over the years has paralleled advances in dosimetry technology, incorporating state-of-the-art equipment and adopting new methodologies to ensure that their calibration services remain at the forefront of precision and accuracy.

Standardization and Quality Control in Dosimetry

The evolution and importance of dosimetry standardization and quality control are crucial aspects of radiation therapy, influencing treatment efficacy and patient safety. The establishment of standardized protocols, such as those developed by the AAPM Task Group 51 (24) and its successors, and the implementation of interlaboratory comparisons and benchmarking through initiatives like the Imaging and Radiation Oncology Core (IROC), have significantly contributed to this field's progress (24).

AAPM Task Group 155 has expanded the groundwork laid by TG-51 by focusing on megavoltage photon beam dosimetry in small fields and non-equilibrium conditions. This specific attention to small fields and non-equilibrium conditions reflects the increasing complexity of modern radiation therapy techniques, such as stereotactic radiosurgery (SRS) and stereotactic body radiotherapy (SBRT), which require precise dose calculations in small volumes that may not be in equilibrium (150).

In Europe, the European Society for Radiotherapy & Oncology (ESTRO) plays a similar role to that of the AAPM in the United States, particularly in dosimetry standardization. ESTRO has established several task groups to address the complexities and challenges of modern radiation therapy dosimetry. One notable effort is the work of the ESTRO Physics Committee, which has developed guidelines analogous to those of AAPM Task Group 51. For instance, the TRS-398 protocol, developed by the International Atomic Energy Agency (IAEA) in collaboration with ESTRO, serves as a cornerstone for the calibration and dosimetry of radiotherapy beams across Europe. This protocol provides a standardized approach for the determination of absorbed dose in water, reflecting the shift from air-kerma-based to absorbed-dose-to-water-based dosimetry, aligning with global efforts to harmonize radiation therapy practices (168).
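Both TG-51 and TRS-398 share the same absorbed-dose-to-water formalism: a corrected chamber reading is multiplied by a beam-quality conversion factor and the chamber's Co-60 calibration coefficient. The numeric values in this sketch are assumed for illustration, not protocol data:

```python
def corrected_reading(m_raw: float, p_tp: float, p_ion: float,
                      p_pol: float, p_elec: float = 1.0) -> float:
    """Fully corrected ion-chamber reading: M = M_raw * P_TP * P_ion *
    P_pol * P_elec (temperature-pressure, recombination, polarity, and
    electrometer corrections), as in the TG-51 / TRS-398 formalism."""
    return m_raw * p_tp * p_ion * p_pol * p_elec

def dose_to_water_gy(m_corr: float, n_dw_gy_per_c: float,
                     k_q: float) -> float:
    """Absorbed dose to water: D_w = M * k_Q * N_D,w, where N_D,w is the
    chamber's Co-60 calibration coefficient traceable to a standards
    laboratory (e.g., via an ADCL to NIST)."""
    return m_corr * k_q * n_dw_gy_per_c

# Illustrative (assumed) numbers: a 20.00 nC raw reading with small
# corrections, N_D,w = 5.4e7 Gy/C, and k_Q = 0.992 for the beam quality.
m = corrected_reading(20.00e-9, 1.010, 1.003, 1.000)
print(round(dose_to_water_gy(m, 5.4e7, 0.992), 3))
```

The traceability chain described earlier lives entirely inside N_D,w: the standards laboratory assigns it, and the clinic applies it unchanged.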

The Role and Evolution of the IROC

The Imaging and Radiation Oncology Core (IROC), historically known as the Radiological Physics Center (RPC), has played a pivotal role in auditing dosimetry practices across institutions involved in National Cancer Institute (NCI) cooperative clinical trials since 1968. The RPC, spurred by both the AAPM and the Committee on Radiation Therapy Studies, aimed to ensure that participating institutions could administer clinically comparable and consistent radiation doses. This objective has been pursued through a variety of auditing tools, including on-site dosimetry reviews, the use of OSLDs/TLDs, and the evaluation of anthropomorphic phantoms and patient treatment plans. The transition from the TG-21 protocol to the TG-51 protocol marked a critical step in standardizing dosimetry calibration across institutions, with current compliance rates for beam calibration reaching nearly 98% for both photon and electron beams. This transition underlines the industry's shift towards more reliable and standardized dosimetry practices.

Interlaboratory comparisons and benchmarking play a crucial role in maintaining dosimetry quality and consistency. Through the IROC, institutions are subject to rigorous audits that include both direct on-site evaluations and remote assessments using OSLDs/TLDs and anthropomorphic phantoms. These comparisons not only ensure adherence to national standards but also foster continuous improvement in dosimetry practices by highlighting areas for enhancement.

The transformation from RPC to IROC reflects the dynamic nature of radiation therapy and the continuous efforts to improve quality assurance in this field. Over the decades, the expansion of cooperative clinical trial groups and the introduction of new radiation therapy modalities have necessitated the evolution of quality assurance measures. Today, IROC stands as a comprehensive entity overseeing nearly 2,000 radiotherapy facilities, providing an integrated approach to radiation oncology and diagnostic imaging quality control in support of NCI's National Clinical Trials Network.

Advances in dosimetry standardization and quality control, spearheaded by entities such as AAPM Task Group 155 and the IROC, exemplify the ongoing commitment to optimizing radiation therapy's safety and effectiveness. The meticulous work of these groups ensures that radiation doses are administered precisely and consistently, ultimately aiming to improve clinical outcomes for cancer patients globally.

THE FUTURE OUTLOOK OF DOSIMETRY

The future of dosimetry is being reshaped by groundbreaking advancements in technology and materials science. These developments promise not only enhanced precision in dose measurements but also the potential for innovative applications in complex radiation fields. This section considers only a few of the emerging technologies and innovations in dosimetry, such as EPR dosimetry, LET-track OSLD, prompt gamma cameras, radio-acoustic dosimetry, and the pivotal role of nanotechnology, particularly the use of graphene.

One such promising development is EPR dosimetry, offering high precision in dose measurement through the detection of radiation-induced free radicals in crystalline or organic materials (169). Another innovative approach is LET-track OSLD, which enables the detailed mapping of radiation tracks at the microscopic level, providing invaluable data for understanding radiation effects on a granular scale (170).

The use of prompt gamma imaging in proton therapy represents a leap forward in real-time treatment verification, allowing for immediate adjustments based on the captured gamma emissions following proton interactions, thereby enhancing treatment accuracy (171). Radio-acoustic dosimetry introduces a novel method by correlating acoustic signals generated by the ionizing radiation with the absorbed dose, presenting a unique avenue for dose measurement without direct interaction with the radiation field (172).

The incorporation of nanotechnology, especially through materials like graphene, is set to revolutionize dosimetry by offering sensors with exceptional sensitivity, rapid response times, and the potential for miniaturization and integration into complex systems (147, 173). These advancements collectively signify a future where dosimetry not only achieves higher accuracy and efficiency but also plays a crucial role in the development of advanced radiation therapy techniques and safety protocols.

Electron Paramagnetic Resonance Dosimetry

Electron Paramagnetic Resonance (EPR) dosimetry has evolved significantly since its inception, becoming a cornerstone in the field of radiation research and therapy. Initially, EPR dosimetry was pivotal in studying free radicals, whose behavior is critical to understanding the effects of ionizing radiation on biological systems. The method's inception can be traced to foundational work by Janzen in 1971 (175), who extensively reviewed the spin trapping technique, which allowed for the detailed study of transient free radicals in irradiated biological molecules (176, 177). This technique marked a significant advance in radiation research, enabling scientists to elucidate the primary processes of radiation interaction with living cells.

Over the years, the application of EPR dosimetry has expanded, driven by advancements in EPR techniques, including the development of continuous wave (CW) and pulsed EPR, which offer high resolution and sensitivity. These advancements have facilitated the study of complex paramagnetic species and their behaviors under irradiation, further enriching our understanding of radiation's biological impacts (178).

Methodological Advances

Electron paramagnetic resonance dosimetry has undergone significant methodological advancements since its inception, markedly enhancing our ability to study the intricate effects of radiation on biological systems. The transition from CW to pulsed EPR represents one of the most pivotal developments in this field. While CW-EPR has been instrumental in the initial exploration of radiation-induced free radicals, pulsed EPR has introduced the capability to observe the dynamics of these radicals with much greater temporal resolution. This methodological shift has facilitated a deeper understanding of the transient processes that occur immediately after irradiation, providing insights into the initial stages of radiation damage to biological molecules (178).

Concomitant with the evolution from CW to pulsed EPR, there have been significant strides in the development of detection methods that offer high resolution and sensitivity. These advancements are critical in the context of radiation research, where the detection of minute quantities of radiation-induced radicals can elucidate the mechanisms of radiation interaction with DNA, proteins, and other biomolecules.

The collective impact of these methodological advancements in EPR dosimetry on the study of radiation effects on biological molecules cannot be overstated. By enabling the detailed observation and analysis of radiation-induced free radicals, these developments have deepened our understanding of the fundamental mechanisms by which ionizing radiation interacts with living cells. This, in turn, has implications for improving radiation therapy techniques and developing strategies to mitigate radiation damage, underscoring the pivotal role of EPR dosimetry in advancing both radiation research and clinical applications.

The inception and development of alanine-EPR dosimetry were significantly bolstered by investments from National Metrology Institutes (NMIs) across the globe. These investments underscored a concerted effort to standardize and elevate the precision of radiation dosimetry. As a result, alanine dosimetry emerged as the system of choice for calibration services, gaining widespread acceptance and adoption across various high-dose application industries, from healthcare sectors involving blood product treatment and medical device sterilization to food preservation and aerospace device testing (179).
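Underlying alanine's role as a transfer standard is a simple operational property: over the dose ranges used in these industries, the amplitude of the alanine EPR signal grows approximately linearly with absorbed dose. A minimal sketch of the calibration-and-readout arithmetic a dosimetry service performs is shown below; the function names and all numbers are illustrative, not drawn from any NMI protocol.

```python
import numpy as np

def fit_alanine_calibration(doses_gy, epr_amplitudes):
    """Least-squares linear fit of EPR signal amplitude vs. absorbed dose.

    Returns (slope, intercept) such that amplitude = slope * dose + intercept.
    """
    slope, intercept = np.polyfit(doses_gy, epr_amplitudes, 1)
    return slope, intercept

def dose_from_amplitude(amplitude, slope, intercept):
    """Invert the calibration to report the dose for an unknown pellet."""
    return (amplitude - intercept) / slope

# Illustrative calibration pellets irradiated at known doses (Gy)
# with their measured EPR amplitudes (arbitrary units).
cal_doses = np.array([5.0, 10.0, 20.0, 40.0])
cal_amps = np.array([51.0, 100.5, 201.0, 399.0])

slope, intercept = fit_alanine_calibration(cal_doses, cal_amps)
unknown_dose = dose_from_amplitude(150.0, slope, intercept)  # ~15 Gy
```

In practice, services also correct for pellet mass, irradiation temperature, and signal fading, and establish traceability by having the calibration pellets irradiated at a primary standards laboratory.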

EPR in High-Dose Dosimetry

Integrating EPR into dosimetry laboratories was not devoid of challenges, particularly in overcoming technological and financial barriers associated with adopting a then-novel method. The transition from optical dosimetry methods to EPR required significant adjustments, including the acquisition of sophisticated equipment and the development of technical expertise. Despite these hurdles, the superior benefits of alanine dosimetry—such as its robustness to environmental conditions, wider dose measurement range, and higher precision—ultimately facilitated its integration into dosimetry labs and industry practices. The robust nature of the alanine dosimeter, requiring no special handling and well suited for a variety of industrial environments, provided a compelling case for its adoption over traditional optical techniques.

In clinical and industrial settings, EPR dosimetry, particularly through alanine dosimeters, has found a broad spectrum of applications. Its utility spans from ensuring the safety and efficacy of blood product treatments and the sterilization of medical devices to enhancing food preservation techniques and ensuring the reliability of components in aerospace testing. The traceability of alanine dosimetry measurements to national standards has been pivotal in facilitating international commerce, enabling a reliable exchange of goods and services that adhere to rigorous safety and quality benchmarks (180).

LET Track OSLD

As the field of radiation dosimetry advances, the integration of emerging technologies and innovations stands at the forefront of transforming how radiation measurement and safety are approached. One of the significant leaps in this realm is the development and application of LET track OSLD, which heralds a new era in the precision and flexibility of radiation dosimetry.

The radiation dosimetry community has long pursued a dosimeter that addresses the myriad limitations of current passive detector technology. The ideal passive integrating detector would be sensitive to charged particles across a wide spectrum of LET values, would require minimal to no post-exposure chemical processing, would be capable of non-destructive (i.e., multiple) readouts using fully automated equipment, and could be erased and reused. Traditional thermoluminescent dosimeters (TLD) and optically stimulated luminescence dosimeters (OSLD), despite their full reusability and high sensitivity to low-LET radiation, fall short in efficiently measuring high-LET radiation from heavy charged particles (HCP) and exhibit minimal to no sensitivity to neutrons (66, 82).

A groundbreaking development in overcoming these limitations is the introduction of a novel Al2O3 fluorescent nuclear track detector (FNTD) by Landauer, Inc. This innovation has showcased sensitivity and functionality surpassing existing nuclear track detectors. The foundation of FNTD technology lies in single crystals of aluminum oxide doped with carbon and magnesium, featuring aggregate oxygen vacancy defects (Al2O3:C, Mg). This composition induces radiation-generated color centers that absorb light at 620 nm and emit fluorescence at 750 nm with high quantum yield and a remarkably short fluorescence lifetime of approximately 75 ± 5 ns (181).

The non-destructive readout of this detector is executed using a confocal fluorescence microscope, allowing for the three-dimensional spatial distribution of fluorescence intensity along the trajectory of a heavy charged particle. This capability enables the reconstruction of particle trajectories through the crystal, where LET can be ascertained as a function of distance along the trajectory based on fluorescence intensity. The advantages of Al2O3:C, Mg FNTD over conventionally processed CR-39 plastic nuclear track detectors are manifold, including superior spatial resolution, an expanded range of LET sensitivity, the obviation of post-irradiation chemical processing, and the detector's capacity for annealing and reuse (182). Preliminary experiments have verified that the material has a low-LET threshold of <1 keV/µm, does not reach saturation at LETs in water as high as 1,800 keV/µm, and can withstand irradiation to fluences exceeding 10⁶ cm⁻² without saturation (track overlap) (66, 82).
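The readout principle just described—fluorescence intensity sampled along the reconstructed trajectory serving as a surrogate for LET—can be sketched in a few lines. The linear calibration factor and the track profile below are invented for illustration; real FNTD analysis involves segmentation of the confocal image stacks and a measured intensity-to-LET response.

```python
import numpy as np

def let_along_track(positions_um, intensities, cal_kev_per_um_per_unit):
    """Map fluorescence intensity sampled along a particle track to LET.

    positions_um : depths along the reconstructed track (micrometers)
    intensities  : fluorescence intensity at each depth (arbitrary units)
    cal_...      : hypothetical linear calibration, keV/um per intensity unit
    """
    let = np.asarray(intensities) * cal_kev_per_um_per_unit
    return list(zip(positions_um, let))

# Illustrative track: intensity rises toward the end of range,
# mirroring the Bragg-peak-like rise in stopping power.
z = [0, 20, 40, 60, 80]                  # um along the trajectory
signal = [10.0, 12.0, 16.0, 28.0, 55.0]  # a.u.
profile = let_along_track(z, signal, cal_kev_per_um_per_unit=2.0)
```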

This evolution in dosimetry technology, spearheaded by the advent of LET-track OSLD, paves the way for enhanced accuracy, efficiency, and applicability of dosimetric assessments across a broader spectrum of radiation types and energies. As these advancements continue to be integrated into practical applications, they promise to significantly impact radiation safety protocols, treatment planning, and monitoring, ensuring that dosimetry remains at the cutting edge of radiation science and safety.

Prompt Gamma Imaging Systems

In the realm of charged particle radiotherapy, the application of prompt gamma camera (PGC) technology represents a significant advancement. This technology revolves around the principles of detecting secondary radiation emissions, specifically prompt gamma rays, during the delivery of proton and ion beam therapies. These secondary emissions result from nuclear interactions within the patient, offering a non-invasive means to verify the range of charged particles with high precision.

The primary energy loss mechanism for protons and ions in tissue is through collisions with atomic electrons, with direct interactions with nuclei playing a minor role in the overall stopping power. It is these nuclear interactions, albeit less frequent, that are pivotal for range verification methods. These inelastic collisions can alter target nuclei, leading to a sequence of de-excitation processes that emit secondary radiation, including prompt gamma rays, which can escape the patient and be detected externally (183, 184).

The efficacy of range verification via prompt gamma imaging (PGI) hinges on the successful detection of these secondary emissions. Prompt gamma rays exhibit an energy spectrum extending up to 10 MeV, interacting with materials primarily through Compton scattering or pair production, thus necessitating sophisticated detection systems designed to capture these high-energy photons (185, 186).

The temporal characteristics of nuclear interactions, governed by the strong force, and the subsequent de-excitation processes, ranging from sub-nanoseconds to several minutes, play a crucial role in the detection strategy. For prompt gamma-based range verification to be successful, the detection system must efficiently capture the fleeting moments of these emissions (187, 188).
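Because the prompt emission occurs within nanoseconds of a beam pulse while much of the background (e.g., neutron-induced capture gammas) arrives later, detection systems commonly gate on arrival time relative to the accelerator pulse. A toy sketch of such a time-of-flight gate follows; the window width and event timestamps are invented for illustration and do not represent any specific PGC system.

```python
def gate_prompt_events(event_times_ns, pulse_time_ns, window_ns=2.0):
    """Keep only detector events arriving within `window_ns` of a beam pulse.

    A real system would repeat this per pulse, synchronized to the
    accelerator RF, and histogram the surviving events by energy and position.
    """
    return [t for t in event_times_ns
            if 0.0 <= t - pulse_time_ns <= window_ns]

# Illustrative timestamps: two prompt gammas close to the pulse at t=100 ns,
# plus two late background events that the gate rejects.
events = [100.4, 101.1, 135.0, 180.2]
prompt = gate_prompt_events(events, pulse_time_ns=100.0)
```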

Developments in PGC technology have focused on enhancing the sensitivity and specificity of these systems to accurately correlate the detected prompt gamma emissions with the Bragg peak location, thereby ensuring the precision of charged particle therapies. This includes innovations in detector design, signal processing algorithms, and integration with treatment planning systems to provide real-time feedback on beam range and energy deposition (189–191).

Recent studies have underscored the potential of PGC in improving the accuracy of range verification in proton therapy, demonstrating its capacity to detect deviations in the Bragg peak location and adjust treatment parameters accordingly (186, 192).

The ongoing refinement of PGC technology, including the development of more compact, efficient, and scalable systems, is anticipated to facilitate broader adoption in clinical settings. These advancements promise to enhance the safety and effectiveness of proton and ion beam therapies, paving the way for more personalized and precise cancer treatment modalities (193, 194).

Radio-Acoustic Dosimetry

Radio-acoustic dosimetry represents a forefront in the field of radiation therapy, particularly proton therapy, offering a non-invasive, real-time method for verifying the range of charged particle beams through the detection of thermoacoustic signals. These signals are generated when the energy deposited by proton beams is converted into heat, causing thermal expansion and thus acoustic emissions detectable as pressure waves (195, 196). This phenomenon, referred to as proton-acoustics, provides a direct correlation between the acoustic signal amplitude and the energy deposited, making it a promising technique for ensuring the accuracy of proton therapy (197, 198).
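The direct proportionality between deposited energy and acoustic amplitude can be made concrete with the standard thermoacoustic relation p0 = Γ·ρ·D, where Γ is the dimensionless Grüneisen parameter, ρ the tissue density, and D the absorbed dose per pulse, valid under stress confinement (dose delivered faster than the acoustic transit time out of the heated region). The Grüneisen value and pulse dose below are representative, not measured:

```python
def thermoacoustic_pressure(dose_gy, density_kg_m3=1000.0, grueneisen=0.2):
    """Initial pressure rise p0 = Gamma * rho * D, in pascals.

    Assumes stress confinement, so the full deposited energy density
    (rho * D, in J/m^3) converts to an initial pressure before the
    acoustic wave propagates away.
    """
    return grueneisen * density_kg_m3 * dose_gy

# A 1 mGy proton pulse in soft tissue yields a sub-pascal signal,
# illustrating why low signal levels are a clinical challenge.
p0 = thermoacoustic_pressure(1e-3)
```

The sub-pascal result for a milligray-scale pulse is consistent with the low-signal challenge discussed below: detection requires averaging over many pulses or optimized pulse time structures.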

The application of thermoacoustic principles to dosimetry, especially in proton therapy, addresses a critical challenge: range uncertainty. The ability of proton-acoustics to accurately locate the Bragg peak, the point at which the majority of the proton beam's energy is deposited, could significantly enhance treatment efficacy and patient safety by confirming that the radiation dose is delivered precisely to the tumor, minimizing exposure to surrounding healthy tissues (155–157).

Despite its potential, several challenges impede the clinical translation of proton-acoustics. The primary obstacles include low signal levels, influenced by the proton pulse structure of clinical accelerators, and the variability in sound speed across different tissues, which can introduce errors in determining the Bragg peak location. Additionally, acoustic transmission through air pockets and bone is limited, posing difficulties in signal detection (199, 200).

Recent advancements in proton therapy technology and proton-acoustic signal detection methods have begun to address these challenges. Modifications in proton beam delivery, aimed at optimizing the time structure of proton pulses, have shown promise in enhancing signal generation and detection. Developments in signal processing and detector design, including the use of clinical ultrasound arrays and advanced computational algorithms, are improving the sensitivity and specificity of proton-acoustic range verification (201203).

The integration of proton-acoustics with other imaging modalities, such as ultrasound and optoacoustic imaging, offers a multi-modal approach to range verification, potentially overcoming limitations related to tissue heterogeneity and sound speed discrepancies. This synergy could provide a more robust and accurate method for proton range verification (204, 205).

Advanced Radiopharmaceutical Dosimetry

The landscape of radiopharmaceutical therapy (RPT) is rapidly advancing with the potential for patient-specific dosimetry to significantly improve clinical outcomes. As Wehrmann et al. (206) and Eberlein et al. (207) suggest, the incorporation of advanced imaging and computational tools into dosimetry has the potential to tailor therapies to the individual patient, enabling more precise and effective treatment protocols.

Patient-specific dosimetry stands at the forefront of personalized medicine in RPT, where the accurate calculation of absorbed doses can lead to the optimization of therapeutic efficacy and the reduction of toxicity. This personalized approach considers individual variability in pharmacokinetics and radiobiological effects, facilitating dose optimization for both tumor control and protection of normal tissues.
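The core calculation behind patient-specific RPT dosimetry is the MIRD schema, D = Ã·S: the time-integrated ("cumulated") activity in a source region multiplied by an S-value giving absorbed dose per unit cumulated activity. A minimal sketch follows, fitting a mono-exponential to serial activity measurements and integrating it analytically; the measurement times, activities, and S-value are synthetic and illustrative, not from any published patient data.

```python
import math

def fit_monoexponential(times_h, activities_mbq):
    """Fit A(t) = A0 * exp(-lambda * t) by log-linear least squares.

    Returns (A0 in MBq, lambda in 1/h).
    """
    n = len(times_h)
    logs = [math.log(a) for a in activities_mbq]
    t_mean = sum(times_h) / n
    l_mean = sum(logs) / n
    sxy = sum((t - t_mean) * (l - l_mean) for t, l in zip(times_h, logs))
    sxx = sum((t - t_mean) ** 2 for t in times_h)
    slope = sxy / sxx  # equals -lambda
    return math.exp(l_mean - slope * t_mean), -slope

def absorbed_dose(a0_mbq, lam_per_h, s_value_gy_per_mbq_h):
    """MIRD: D = A_tilde * S, with A_tilde = A0 / lambda for a mono-exponential."""
    a_tilde = a0_mbq / lam_per_h  # cumulated activity, MBq*h
    return a_tilde * s_value_gy_per_mbq_h

# Synthetic serial measurements (e.g., from quantitative SPECT/CT).
times = [4.0, 24.0, 48.0, 96.0]
acts = [100.0 * math.exp(-0.02 * t) for t in times]
a0, lam = fit_monoexponential(times, acts)
dose = absorbed_dose(a0, lam, s_value_gy_per_mbq_h=1e-3)  # hypothetical S-value
```

Real workflows fit multi-compartment models or integrate measured time-activity curves numerically, and use organ- or voxel-level S-values from tabulations or Monte Carlo transport; the mono-exponential closed form is simply the most instructive case.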

The introduction of new radiopharmaceuticals expands the therapeutic options available for treating a variety of cancers and other diseases. As the range of targetable molecular pathways grows, so does the need for dosimetric models that can accurately predict the distribution and effects of these agents within the body. Advances in imaging techniques such as PET/CT and SPECT, together with novel software incorporating artificial intelligence, can refine dosimetric calculations and streamline the therapeutic planning process.

The integration of computational tools, including machine learning algorithms, offers the promise of improved dosimetric models that can learn from a vast array of patient data to make more accurate predictions. These tools can help to overcome the challenges associated with heterogeneous distributions of radiopharmaceuticals within tumors and across different patient anatomies.

Nanotechnological Applications in Dosimetry

The integration of nanotechnology into the realm of radiation dosimetry, particularly through the use of graphene, is paving the way for unprecedented advancements in the accuracy, sensitivity, and versatility of dosimetric applications. Nanomaterials, with their unique physical and chemical properties, have the potential to revolutionize radiation detection and measurement, offering new avenues for both research and clinical applications in radiation therapy.

Graphene, a two-dimensional carbon nanomaterial, has garnered significant attention in recent years for its remarkable properties, including high electrical conductivity, mechanical strength, and thermal conductivity. These characteristics make graphene an ideal candidate for enhancing the performance of dosimetric devices. Its ability to conduct electricity can be sensitively altered by the presence of ionizing radiation, allowing for the precise measurement of radiation doses (208, 209).

The application of graphene in radiation dosimetry exploits its conductivity changes when exposed to radiation, serving as the basis for developing highly sensitive and fast-response dosimeters. This sensitivity to radiation, combined with graphene's mechanical robustness, enables the creation of flexible, durable dosimeters that can be used in a variety of radiation therapy settings, including challenging environments where traditional dosimeters may fail (210, 211).
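After calibration, the readout principle described here—radiation-induced change in graphene's conductance as a dose surrogate—reduces to mapping a measured resistance shift onto a dose. A toy sketch assuming a linear response over the calibrated range follows; the sensitivity value and resistances are purely illustrative, not properties of any reported device.

```python
def dose_from_resistance_shift(r_measured_ohm, r_baseline_ohm,
                               sensitivity_ohm_per_gy):
    """Convert a resistance shift of a hypothetical graphene sensor to dose.

    Assumes a linear calibration delta_R = sensitivity * dose, valid only
    within the calibrated dose range and for the radiation quality used
    during calibration.
    """
    delta_r = r_measured_ohm - r_baseline_ohm
    return delta_r / sensitivity_ohm_per_gy

# Illustrative numbers: a 1.5 ohm shift at 0.5 ohm/Gy implies 3 Gy.
dose = dose_from_resistance_shift(101.5, 100.0, sensitivity_ohm_per_gy=0.5)
```

As the following paragraphs note, a practical device would need separate calibrations per radiation type, since the conductance response to alpha, beta, and gamma radiation differs.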

Graphene's versatility allows for the development of novel dosimetric systems that can provide real-time monitoring of radiation dose rates, potentially improving the safety and efficacy of radiation therapy treatments. By incorporating graphene-based sensors into wearable devices, researchers are exploring the possibility of continuous, in vivo radiation monitoring, which could lead to more personalized and adaptive radiation therapy regimens (212, 213).

The integration of graphene into dosimetric applications is not without challenges. The fabrication of graphene-based sensors requires sophisticated techniques to maintain the material's integrity and functionality. Additionally, the response of graphene to different types of radiation (e.g., alpha particles, beta particles, gamma rays) must be thoroughly understood and calibrated to ensure accurate dose measurements. Despite these challenges, ongoing research is focusing on overcoming these hurdles through innovative material engineering and device design (214, 215).

The use of graphene, and nanotechnology in general, in radiation dosimetry is an exciting and rapidly evolving field. The unique properties of graphene offer promising opportunities for enhancing the accuracy, sensitivity, and versatility of dosimetric measurements. As research in this area continues to advance, graphene-based dosimeters may soon become a staple in radiation therapy.

ADVANCED SMALL ANIMAL IRRADIATION SYSTEMS

While techniques in modern medicine are typically established first in animal studies before being permitted in the clinical environment, small animal irradiation is ripe for a reversal of this flow. Many small-animal irradiation systems remain fixed-beam irradiators, designed for whole-body coverage and operating at energies lower than those generally used in the modern radiotherapy clinic. Although cost is likely the main factor hindering technological advancement in this area of research, modern clinical radiation delivery techniques and technologies now outpace those seen in general animal studies by decades.

Intensity modulated radiation therapy (IMRT) stands as a transformative approach in the sphere of radiation therapy, especially within the realm of small animal irradiation. Its core principle revolves around modulating the radiation dose with high precision, allowing a higher concentration of radiation to be delivered to the tumor while sparing the surrounding healthy tissue to an unprecedented degree. The intricacy of IMRT is further enhanced by computed tomography (CT) image-based treatment planning, which serves as the crux for tailoring the radiation dose to the tumor's three-dimensional shape. The utilization of CT in treatment planning introduces a layer of exactitude in dose calculation and distribution, as it provides detailed insights into the anatomical context of the target volume and surrounding organs-at-risk (216).

Integration of CT imaging allows clinicians to delineate the tumor and critical structures with remarkable accuracy, enabling a conformal dose distribution that is sculpted according to the patient's unique anatomy. This level of customization in treatment planning is pivotal for small animal pre-clinical research, as it ensures the reproducibility and validity of the findings when translated to clinical scenarios (217).
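Conceptually, the dose modulation described above is an inverse problem: given a matrix mapping beamlet weights to voxel doses (derived from the CT-based anatomical model), find weights that reproduce the prescription on the tumor while sparing organs-at-risk. A toy least-squares sketch on an invented 3-voxel, 2-beamlet geometry follows; clinical optimizers add nonnegativity and dose-volume constraints and work with millions of voxels.

```python
import numpy as np

# Rows: voxels (two tumor voxels, one organ-at-risk); columns: beamlets.
# Entries give dose per unit beamlet weight (an invented influence matrix).
influence = np.array([[1.0, 0.2],
                      [0.3, 1.0],
                      [0.1, 0.1]])
prescription = np.array([2.0, 2.0, 0.0])  # Gy: full dose to tumor, zero to OAR

# Unconstrained least-squares solution for the beamlet weights.
weights, *_ = np.linalg.lstsq(influence, prescription, rcond=None)
achieved = influence @ weights  # achieved dose per voxel
```

Even this toy problem shows the central trade-off: the tumor voxels receive close to the prescribed 2 Gy while the organ-at-risk voxel receives only a small fraction, because the optimizer balances all voxels simultaneously.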

Benchmarking Systems by Phantom Studies

Enhanced precision in preclinical radiotherapy necessitates the advancement and standardization of dosimetric tools such as phantoms, particularly to attain an Imaging and Radiation Oncology Core (IROC) equivalent for small animal irradiation studies. Simple geometry phantoms offer a replicable and cost-effective means to standardize dosimetry across institutions, enabling comparative studies without the logistical hindrances of phantom exchange (218). These phantoms, often composed of a single material like acrylic or polystyrene, mimic the radiological environment of the subjects under study and allow for straightforward dosimeter and radiochromic film insertion (219).

Advancements from the Centers for Medical Countermeasures against Radiation (CMCR) Radiation Physics Core (RPC) have introduced heterogeneity into simple geometry phantoms to better represent animal anatomy, still without complicating manufacturing processes excessively (220). These modified phantoms support a more realistic simulation of tissue responses while maintaining the straightforward production and high reproducibility essential for inter-institutional studies.

In contrast, the fidelity to animal anatomies and morphologies is significantly heightened in the mouse-morphic (rodentia-morphic) phantoms, which pose a production challenge due to their complexity and cost (221). These intricate models, such as the single-use 3D Presage phantoms, are less accessible for routine use due to the permanent alterations caused by radiation exposure (222). The recent proliferation of 3D printing has mitigated these challenges, reducing costs and complexity in creating high-fidelity phantoms (223).

The CMCR dosimetric project capitalized on this technology, creating a mouse-morphic phantom from micro-CT data of a C57BL/6 mouse, leading to a dosimetric validation within 1.2% accuracy against Monte Carlo dose calculations when assuming ABS (acrylonitrile-butadiene-styrene) tissue equivalence (224). The negligible inter-phantom variation assessed by weight underscores the precision and reproducibility of these models, fortifying their utility in standardizing dosimetry for preclinical research (225).
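The 1.2% agreement quoted above amounts to comparing measured dosimeter readings against Monte Carlo predictions at matched phantom locations. A sketch of that check is given below; the point doses are invented for illustration and are not the CMCR project's data.

```python
def max_percent_difference(measured_gy, monte_carlo_gy):
    """Largest |measured - MC| / MC, in percent, over matched points."""
    return max(abs(m - c) / c * 100.0
               for m, c in zip(measured_gy, monte_carlo_gy))

# Illustrative point doses (Gy) at dosimeter positions in the phantom.
measured = [1.98, 2.01, 2.02]
mc_predicted = [2.00, 2.00, 2.00]
worst_case = max_percent_difference(measured, mc_predicted)
```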

Such standardization efforts in phantom studies are crucial for translating preclinical findings into clinical settings. They serve as a quality assurance bedrock, ensuring that the dosimetric data obtained from small animal studies are reliable and applicable for further research and clinical application, thereby aligning with the broader objectives of radiation oncology (226, 227).

In the wider scope, the move towards an IROC equivalent for small animal irradiation research embodies a commitment to rigor and reproducibility, with the aim to solidify the translational pipeline from laboratory bench to bedside. The development of standardized and anatomically accurate phantoms is not merely a technical endeavor but a pivotal step in this translational process, promising to elevate the quality of preclinical radiation studies and by extension, the reliability of subsequent clinical trials (228, 229).

CONCLUSION

In 2024 there is renewed appreciation of the need for standardization and rigor in dose delivery in radiobiological investigations. Whereas this has long been achieved in radiation medicine through standards and requirements emanating from a variety of regulatory and certifying entities, no such authoritative bodies exist to dictate standards for non-clinical purposes. To fill this void, groups in the US and in Europe have worked to develop a framework for what is required for quality research in the preclinical setting. At least a portion of the impetus stems from the continuing shift from radioactive source-based irradiators to the use of X rays, which introduces several dosimetric and dose distribution aspects that must be taken into consideration. One point where demand for rigor and standardization can be imposed is at the funding level. Funding agencies may require that adequate detail be included in applications, allowing peer reviewers to confirm that experiments are designed correctly and with proper attention to procedures and dose measurements. Once funded, there can be ongoing requirements to follow agreed-upon standards in the conduct of the experiments. An example of this is the recent work from the Compatibility of Irradiation Research Protocols Expert Roundtable (CIRPER), which resulted in a set of guidelines for recommended methodological disclosure requirements (230). The critical details surrounding dosimetry and dose delivery must also be included in the dissemination of the results. To this end, parallel commentaries published in Radiation Research (231) and in the International Journal of Radiation Biology (232) provided minimum reporting standards that should be expected when publishing radiobiological research in the literature.

Also pending are the results of AAPM Task Group 319, formed to establish guidelines for accurate dosimetry in radiation biology experiments. The hope is that by requiring adherence to certain standards when applying for funding, extended to mandates to follow agreed upon protocols in the conduct of the research, and culminating with the inclusion of a minimum set of details at publication, there will be sufficient peer pressure to drive the field towards improved dosimetry and dosimetry standards, and a harmonization as to how radiobiological research is conducted.

It has been seventy years since the first article in the then-new journal Radiation Research (2) laid out a few simple dosimetric requirements that a biologist should address when conducting experiments with ionizing radiation. Even while pessimistic that these requirements were being met, that article recognized the need to strive toward reliable, reproducible, and accurate dosimetry as a foundation for quality research in the field. Today, with many new methods and techniques available for measuring dose with precision and accuracy, but also with many new challenges as new delivery methods are explored, that need remains. With communication and collaboration between biologists and physicists, from the design through the conduct of experiments and on to the reporting of results, the radiation research community can be optimistic that rigor and standardization in dosimetry will assure the quality of preclinical radiobiological research and radiation medicine in the future.

©2024 by Radiation Research Society. All rights of reproduction in any form reserved.

REFERENCES

1.

Lovell S, Simpson L, An Introduction to Radiation Dosimetry: Cambridge University Press; 1979. Google Scholar

2.

Fano U, Introductory remarks on the dosimetry of ionizing radiations. Radiat Res. 1954; 1:3–9. Google Scholar

3.

Marinelli LD, X-ray dosimetry: General principles and experimental factors. Radiat Res. 1954; 1:23–33. Google Scholar

4.

Moyer BJ, Neutron physics of concern to the biologist. Radiat Res. 1954; 1:10–22. Google Scholar

5.

Riesen H, Liu Z, Optical storage phosphors and materials for ionizing radiation. Current Topics in Ionizing Radiation Research, IntechOpen, 2012. Google Scholar

6.

Bohler G, Spode E, Vormum G, Radiation protection problems in working with radioactive isotopes. I. General fundamentals and danger possibilities from external irradiation. Chem. Tech (Berlin), 11. 1959. Google Scholar

7.

Huda W, Radiation dosimetry in diagnostic radiology. Am J Roentgen. 1997; 169:1487–88. Google Scholar

8.

Williamson JF, Li Z, Dolan J, Monte Carlo aided dosimetry of the microselectron pulsed and high dose-rate 192Ir sources. Med Phys. 1995; 22:209–34. Google Scholar

9.

Dolan J, Li Z, Williamson JF, Monte Carlo and experimental dosimetry of an I125 brachytherapy seed. Med Phys. 2006; 33: 4675–84. Google Scholar

10.

Holmes SM, DeWerd LA, Micka JA, Experimental determination of the radial dose function of Sr90/Y90 IVBT sources. Med Phys. 2006; 33:3379–86. Google Scholar

11.

Karaiskos P, Angelopoulos A, Baras P, Sakelliou L, Sandilos P, Dardoufas K, Vlachos L, A Monte Carlo investigation of the dosimetric characteristics of the VariSource 192Ir high dose rate brachytherapy source. Med Phys. 1999; 26:1498–504. Google Scholar

12.

Wallace RE, Fan JJ, Evaluation of a new brachytherapy iodine-125 source by AAPM TG43 formalism. Med Phys. 1998; 25:2225–32. Google Scholar

13.

Pradhan AS, Gopalakrishnan AK, Shirva VK, Iyer PS, A TLD method for evaluation of radiation quality and measurement of entrance skin dose from diagnostic x ray practices. Radiat Prot Dosim. 1992; 40:49–52. Google Scholar

14.

Bartlett DT, A review of Japanese bomb dosimetry. Radiation Protection Dosimetry 1982; 2:127–39. Google Scholar

15.

Collins PJ, Gorbatkov D, Schultz FW, A Graphical User Interface For Diagnostic Radiology Dosimetry Using Monte Carlo (MCNP) Simulation. 10th International Congress of the International Radiation Protection Association, IRPA, 2000. Google Scholar

16.

Sulieman A, Kappas K, Theodorou K, Entrance and peripheral dose measurements during radiotherapy. Nat Conf Biomed Phys Eng, Bulgaria, 2008. Google Scholar

17.

Fano U, Note on the Bragg-Gray cavity principle for measuring energy dissipation. Radiat Res. 1954; 1:237–40. Google Scholar

18.

Spencer LV, Attix FH, A theory of cavity ionization. Radiation research 1955; 3. Google Scholar

19.

Andreo P, Brahme A, Stopping power data for high-energy photon beams. Phys Med Biol. 1986; 31:839–58. Google Scholar

20.

Berger MJ, Seltzer SM, Stopping powers for electrons and positrons. Report No. 34 of the Commission on Radiation Units and Measurements, 1982, Washington, DC. Google Scholar

21.

Seltzer SM, Berger MJ, Improved bremsstrahlung cross sections for transport calculations. IEEE Trans Nucl Sci. 1983; 30:4368–70. Google Scholar

22.

Horowitz YS, Photon general cavity theory. Radiat Prot Dosim. 1984; 9:5–18. Google Scholar

23.

Burlin TE, A general theory of cavity ionisation. Br J Radiol. 1966; 39:727–34. Google Scholar

24.

Almond PR, Biggs PJ, Coursey BM, Hanson WF, Huq MS, Nath R, et al. AAPM's TG-51 protocol for clinical reference dosimetry of high-energy photon and electron beams. Med Phys. 1999; 26:1847–70. Google Scholar

25.

Ma CM, Coffey CW, DeWerd LA, Liu C, Nath R, Seltzer SM, et al. AAPM protocol for 40-300 kV x-ray beam dosimetry in radiotherapy and radiobiology. Med Phys. 2001; 28:868–93. Google Scholar

26. Rodrigues AE, Toward Accurate Small Animal Dosimetry and Irradiator Quality Assurance. PhD dissertation, Duke University; 2012.

27. Abogunde M, Toncheva G, Anderson-Evans C, Craciunescu O, Steffey B, Dewhirst M, et al. Dose comparison between AAPM TG-61 protocol and MOSFET-based phantom dosimetry. Health Phys Soc Spring Meeting, 2009.

28. Aird EGA, Farmer FT, The design of a thimble chamber for the Farmer dosimeter. Phys Med Biol. 1972; 17:169–74.

29. DeWerd L, Mackie R, Comment on "Comparison of ionization chambers of various volumes for IMRT absolute dose verification." Med Phys. 2003; 30:119–23.

30. Rogers DWO, Bielajew AF, Wall attenuation and scatter corrections for ion chambers: Measurements versus calculations. Phys Med Biol. 1990; 35:1065–78.

31. Podgorsak EB, Radiation Oncology Physics: A Handbook for Teachers and Students. International Atomic Energy Agency; 2005.

32. DeWerd LA, Wagner LK, Characteristics of radiation detectors for diagnostic radiology. Appl Radiat Isot. 1999; 50:125–36.

33. Fricke H, Morse S, The chemical action of roentgen rays on dilute ferrous sulfate solutions as a measure of dose. Am J Roentgenol Radium Ther. 1927; 18.

34. Rosado P, Salata C, David M, Mantuano A, Pickler A, Mota CL, et al. Determination of the absorbed dose to water for medium-energy x-ray beams using Fricke dosimetry. Med Phys. 2020; 47:5802–9.

35. Domen SR, A sealed water calorimeter for measuring absorbed dose. J Res Natl Inst Stand Technol. 1994; 99:121–41.

36. Domen SR, Lamperti PJ, A heat-loss-compensated calorimeter: Theory, design, and performance. J Res Natl Bur Stand A Phys Chem. 1974; 78A:595–610.

37. Duane S, Aldehaybes M, Bailey M, Lee N, Thomas C, Palmans H, An absorbed dose calorimeter for IMRT dosimetry. Metrologia. 2012; 49:S168–S173.

38. Medin J, Ross C, Stucki G, Klassen N, Seuntjens J, Commissioning of an NRC-type sealed water calorimeter at METAS using 60Co γ-rays. Phys Med Biol. 2004; 49:4073–86.

39. Bass G, Shipley D, Flynn S, Thomas R, A prototype low-cost secondary standard calorimeter for reference dosimetry with ultra-high pulse dose rates. Br J Radiol. 2023; 96:20220638.

40. Messenger GC, Ash MS, The Effects of Radiation on Electronic Systems. Van Nostrand Reinhold Co.; 1986.

41. Sze SM, Ng KK, Physics of Semiconductor Devices. 3rd ed. John Wiley & Sons; 2006.

42. Carrara M, Cutajar D, Alnaghy S, Espinoza A, Romanyukha A, Presilla S, et al. Semiconductor real-time quality assurance dosimetry in brachytherapy. Brachytherapy 2018; 17:133–45.

43. Rosenfeld AB, Novel detectors for silicon-based microdosimetry, their concepts and applications. Nucl Instrum Meth Phys Res A. 2016; 809:156–70.

44. Bradley PD, Rosenfeld AB, Zaider M, Solid state microdosimetry. Nucl Instrum Meth Phys Res B. 2001; 184:135–57.

45. Jones AR, The application of some direct current properties of silicon junction detectors to γ-ray dosimetry. Phys Med Biol. 1963; 8:451–9.

46. Grusell E, Rikner G, Radiation damage induced dose rate non-linearity in an n-type silicon detector. Acta Oncologica 1984; 23:465–9.

47. Grusell E, Rikner G, Selective shielding of a p-Si detector for quality independence. Acta Oncologica 1985; 24:65–9.

48. Li XA, Ma CM, Salhani D, Measurement of percentage depth dose and lateral beam profile for kilovoltage x-ray therapy beams. Phys Med Biol. 1997; 42:2561–8.

49. Saini AS, Zhu TC, Energy dependence of commercially available diode detectors for in-vivo dosimetry. Med Phys. 2007; 34:1704–11.

50. Alfonso R, Andreo P, Capote R, Huq MS, Kilby W, et al. A new formalism for reference dosimetry of small and nonstandard fields. Med Phys. 2008; 35:5179–86.

51. Marre D, Marinello G, Comparison of p-type commercial electron diodes for in vivo dosimetry. Med Phys. 2004; 31:50–6.

52. Rikner G, Grusell E, General specifications for silicon semiconductors for use in radiation dosimetry. Phys Med Biol. 1987; 32:1109–17.

53. Grusell E, Rikner G, Linearity with dose rate of low resistivity p-type silicon semiconductor detectors. Phys Med Biol. 1993; 38:785–92.

54. Jursinic PA, Dependence of diode sensitivity on the pulse rate of delivered radiation. Med Phys. 2013; 40:021720.

55. Andreo P, Burns DT, Nahum AE, Seuntjens J, Attix FH, Fundamentals of Ionizing Radiation Dosimetry. Wiley-VCH; 2017.

56. Bruzzi M, Novel silicon devices for radiation therapy monitoring. Nucl Instrum Meth Phys Res A. 2016; 809:105–12.

57. Petasecca M, Alhujaili S, Aldosari AH, et al. Angular independent silicon detector for dosimetry in external beam radiotherapy. Med Phys. 2015; 42:4708–18.

58. Poch W, Holmes-Siedle A, The dosimeter: a new instrument to measure radiation dose. RCA Engineer. 1970; 16:56–9.

59. Holmes-Siedle A, The space-charge dosimeter. Nucl Instrum Meth. 1974; 121:169–79.

60. Adams L, Holmes-Siedle A, The development of an MOS dosimetry unit for use in space. IEEE Trans Nucl Sci. 1978; 25:1607–12.

61. International Atomic Energy Agency. Dosimetry of Small Static Fields Used in External Beam Radiotherapy. Technical Report Series No. 483. 2017; 211.

62. Consorti R, Petrucci A, Fortunato F, Soriani A, Marzi S, Iaccarino G, et al. In vivo dosimetry with MOSFETs: dosimetric characterization and first clinical results in intraoperative radiotherapy. Int J Radiat Oncol Biol Phys. 2005; 63:952–60.

63. Ciocca M, Piazzi V, Lazzari R, Vavassori A, Luini A, Veronesi P, et al. Real-time in vivo dosimetry using micro-MOSFET detectors during intraoperative electron beam radiation therapy in early-stage breast cancer. Radiother Oncol. 2006; 78:213–6.

64. Kron T, Thermoluminescence dosimetry and its applications in medicine, Part 1: Physics, materials and equipment. Australas Phys Eng Sci Med. 1994; 17:175–99.

65. Pai S, Das IJ, Dempsey JF, Lam KL, Losasso TJ, Olch AJ, et al. TG-69: Radiographic film for megavoltage beam dosimetry. Med Phys. 2007; 34:2228–58.

66. Cameron JR, Suntharalingam N, Kenney GN, Thermoluminescence Dosimetry. University of Wisconsin Press; 1968.

67. Kry SF, Alvarez P, Cygler JE, DeWerd LA, Howell RM, Meeks SL, et al. AAPM TG 191: Clinical use of luminescent dosimeters: TLDs and OSLDs. Med Phys. 2020; 47:e19–e51.

68. Devic S, Radiochromic film dosimetry: Past, present, and future. Physica Medica 2011; 27:122–34.

69. Devic S, Seuntjens J, Sham E, Podgorsak EB, Schmidtlein CR, Kirov AS, Soares CG, Precise radiochromic film dosimetry using a flat-bed document scanner. Med Phys. 2005; 32:2245–53.

70. Butson MJ, Yu PKN, Cheung T, Metcalfe P, Radiochromic film for medical radiation dosimetry. Mater Sci Eng R Rep. 2003; 41:61–120.

71. Butson MJ, Cheung T, Yu PKN, Dosimetry for IMRT using radiochromic film. Radiother Oncol. 2003; 68:307–13.

72. Lewis D, Micke A, Yu X, Chan MF, An efficient protocol for radiochromic film dosimetry combining calibration and measurement in a single scan. Med Phys. 2012; 39:6339–50.

73. Palmer AL, Di Pietro P, Alobaidli S, Issa F, Doran S, Bradley D, Nisbet A, Comparison of methods for the measurement of radiation dose distributions in high dose rate (HDR) brachytherapy: Ge-doped optical fiber, EBT3 Gafchromic film, and PRESAGE radiochromic plastic. Med Phys. 2013; 40:061707.

74. Cusumano D, Fumagalli ML, Marchetti M, Fariselli L, De Martin E, Dosimetric verification of stereotactic radiosurgery/stereotactic radiotherapy dose distributions using Gafchromic EBT3. Med Dosim. 2015; 40:226–31.

75. Fiandra C, Fusella M, Giglioli FR, Filippi AR, Mantovani C, Ricardi U, Ragona R, Comparison of Gafchromic EBT2 and EBT3 for patient-specific quality assurance: Cranial stereotactic radiosurgery using volumetric modulated arc therapy with multiple noncoplanar arcs. Med Phys. 2013; 40:082105.

76. Chen C, Tang S, Mah D, Chan M, SU-E-T-286: Dose verification of spot-scanning proton beam using GafChromic EBT3 film. Med Phys. 2015; 42:3399.

77. Micke A, Lewis DF, Yu X, Multichannel film dosimetry with nonuniformity correction. Med Phys. 2011; 38:2523–34.

78. Kairn T, Hardcastle N, Kenny J, Meldrum R, Tomé WA, Aland T, EBT2 radiochromic film for quality assurance of complex IMRT treatments of the prostate: Micro-collimated IMRT, RapidArc, and TomoTherapy. Australas Phys Eng Sci Med. 2011; 34:333–43.

79. Marrazzo L, Zani M, Pallotta S, Arilli C, Casati M, Compagnucci A, et al. GafChromic® EBT3 films for patient-specific IMRT QA using a multichannel approach. Physica Medica 2015; 31:1035–42.

80. Devic S, Seuntjens J, Abdel-Rahman W, Evans M, Olivares M, Podgorsak EB, et al. Accurate skin dose measurements using radiochromic film in clinical applications. Med Phys. 2006; 33:1116–24.

81. Daniels F, Boyd C, Saunders D, Thermoluminescence as a research tool. Science 1953; 117:343–9.

82. Oberhofer M, Scharmann A, Applied Thermoluminescence Dosimetry: Lectures of a course held at the Joint Research Centre, Ispra, Italy, 12–16 November 1979. 1981.

83. Moscovitch M, Horowitz YS, Thermoluminescent materials for medical applications: LiF:Mg,Ti and LiF:Mg,Cu,P. Radiat Meas. 2006; 41:S71–S77.

84. Kron T, Haworth A, Williams I, Dosimetry and medical radiation physics: Closing the gap. Physica Medica 2014; 30:709–15.

85. McKay KG, Electron bombardment conductivity in diamond. Phys Rev 1948; 74:1606–21.

86. Planskoy B, Evaluation of diamond radiation dosemeters. Phys Med Biol. 1980; 25:519–32.

87. Hoban PW, Heydarian M, Beckham WA, Beddoe AH, Dose rate dependence of a PTW diamond detector in the dosimetry of a 6 MV photon beam. Phys Med Biol. 1994; 39:1219–29.

88. Laub WU, Kaulich TW, Nüsslin F, A diamond detector in the dosimetry of high-energy electron and photon beams. Phys Med Biol. 1999; 44:2183–92.

89. Fidanzio A, Azario L, Kalish R, Avigal Y, Conte G, Ascarelli P, et al. A preliminary dosimetric characterization of chemical vapor deposition diamond detector prototypes in photon and electron radiotherapy beams. Med Phys. 2005; 32:389–95.

90. Descamps C, Tromson D, Tranchant N, Isambert A, Bridier A, De Angelis C, et al. Clinical studies of optimised single crystal and polycrystalline diamonds for radiotherapy dosimetry. Radiat Meas. 2008; 43:933–8.

91. Ciancaglioni I, Marinelli M, Milani E, Prestopino G, Verona C, Verona-Rinati G, et al. Dosimetric characterization of a synthetic single crystal diamond detector in clinical radiation therapy small photon beams. Med Phys. 2012; 39:4493–501.

92. Laub WU, Crilly R, Clinical radiation therapy measurements with a new commercial synthetic single crystal diamond detector. J Appl Clin Med Phys. 2014; 15:4890.

93. Di Venanzio C, Marinelli M, Milani E, Prestopino G, Verona C, Verona-Rinati G, et al. Characterization of a synthetic single crystal diamond Schottky diode for radiotherapy electron beam dosimetry. Med Phys. 2013; 40:021712.

94. Tromson D, Rebisz-Pomorska M, Tranchant N, Isambert A, Moignau F, Moussier A, et al. Single crystal CVD diamond detector for high resolution dose measurement for IMRT and novel radiation therapy needs. Diam Relat Mater. 2010; 19:1012–6.

95. De Coste V, Francescon P, Marinelli M, Masi L, Paganini L, Pimpinella M, et al. Is the PTW 60019 microDiamond a suitable candidate for small field reference dosimetry? Phys Med Biol. 2017; 62:7036–55.

96. O'Brien DJ, León-Vintró L, McClean B, Small field detector correction factors kQclin,Qmsr(fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods. Med Phys. 2016; 43:411.

97. Abtahi SM, Aghamiri SMR, Khalafi H, Optical and MRI investigations of an optimized acrylamide-based polymer gel dosimeter. J Radioanal Nucl Chem 2014; 300:287–301.

98. Senden RJ, De Jean P, McAuley KB, Schreiner LJ, Polymer gel dosimeters with reduced toxicity: a preliminary investigation of the NMR and optical dose-response using different monomers. Phys Med Biol. 2006; 51:3301–14.

99. Maryanski MJ, Gore JC, Kennan RP, Schulz RJ, NMR relaxation enhancement in gels polymerized and cross-linked by ionizing radiation: a new approach to 3D dosimetry by MRI. Magn Reson Imaging 1993; 11:253–8.

100. Hilts M, Audet C, Duzenli C, Jirasek A, Polymer gel dosimetry using x-ray computed tomography: a feasibility study. Phys Med Biol. 2000; 45:2559–71.

101. Maryanski MJ, Ibbott GS, Eastman P, Schulz RJ, Gore JC, Radiation therapy dosimetry using magnetic resonance imaging of polymer gels. Med Phys. 1996; 23:699–705.

102. Mather ML, Whittaker AK, Baldock C, Ultrasound evaluation of polymer gel dosimeters. Phys Med Biol. 2002; 47:1449–58.

103. Ibbott GS, Applications of gel dosimetry. J Phys Conf Ser. 2004; 3:58.

104. Abtahi S, Aghamiri S, Khalafi H, Rahmani F, An investigation into the potential applicability of gel dosimeters for dosimetry in boron neutron capture therapy. Int J Radiat Res 2014; 12:139–49.

105. Abtahi S, Characteristics of a novel polymer gel dosimeter formula for MRI scanning: dosimetry, toxicity and temporal stability of response. Physica Medica 2016; 32:1156–61.

106. Khezerloo D, Nedaie HA, Takavar A, Zirak A, Farhood B, Movahedinejhad H, et al. PRESAGE® as a solid 3-D radiation dosimeter: a review article. Radiat Phys Chem 2017; 141:88–97.

107. Low DA, Moran JM, Dong L, Oldham M, Dosimetry tools and techniques for IMRT. Med Phys. 2011; 38:1313–38.

108. Hussein M, Rowshanfarzad P, Ebert MA, Nisbet A, Clark CH, A critical evaluation of the PTW 2D-ARRAY seven29 and OCTAVIUS II phantom for IMRT and VMAT verification. J Appl Clin Med Phys. 2013; 14:274–92.

109. Allgaier B, Schüle E, Würfel J, Dose reconstruction in the OCTAVIUS 4D phantom and in the patient without using dose information from the TPS. White Paper, 2013.

110. Sadagopan R, Bencomo JA, Martin RL, Nilsson G, Matzen T, Balter PA, Characterization and clinical evaluation of a novel IMRT quality assurance system. J Appl Clin Med Phys. 2009; 10:104–19.

111. Nelms BE, Opp D, Robinson J, Wolf TK, Zhang G, Moros E, et al. VMAT QA: Measurement-guided 4D dose reconstruction on a patient. Med Phys. 2012; 39:4228–38.

112. Knoll GF, Radiation Detection and Measurement. John Wiley & Sons, Inc.; 2010.

113. Tsoulfanidis N, Landsberger S, Measurement and Detection of Radiation. CRC Press; 2015.

114. Beaulieu L, Beddar S, Review of plastic and liquid scintillation dosimetry for photon, electron, and proton therapy. Phys Med Biol. 2016; 61:R305–R343.

115. Lambert J, McKenzie DR, Law S, Elsey J, Suchowerska N, A plastic scintillation dosimeter for high dose rate brachytherapy. Phys Med Biol. 2006; 51:5505–16.

116. Wang LLW, Perles LA, Archambault L, Sahoo N, Mirkovic D, Beddar S, Determination of the quenching correction factors for plastic scintillation detectors in therapeutic high-energy proton beams. Phys Med Biol. 2012; 57:7767–81.

117. Darafsheh A, Taleei R, Kassaee A, Finlay JC, The visible signal responsible for proton therapy dosimetry using bare optical fibers is not Čerenkov radiation. Med Phys. 2016; 43:5973–80.

118. Robertson D, Mirkovic D, Sahoo N, Beddar S, Quenching correction for volumetric scintillation dosimetry of proton beams. Phys Med Biol. 2013; 58:261–73.

119. Boivin J, Beddar S, Guillemette M, Beaulieu L, Systematic evaluation of photodetector performance for plastic scintillation dosimetry. Med Phys. 2015; 42:6211–20.

120. Guillot M, Beaulieu L, Archambault L, Beddar S, Gingras L, A new water-equivalent 2D plastic scintillation detectors array for the dosimetry of megavoltage energy photon beams in radiation therapy. Med Phys. 2011; 38:6763–74.

121. Glaser AK, Zhang R, Davis SC, Gladstone DJ, Pogue BW, Time-gated Cherenkov emission spectroscopy from linear accelerator irradiation of tissue phantoms. Opt Lett. 2012; 37:1193–5.

122. Snyder C, Pogue BW, Jermyn M, Tendler I, Andreozzi JM, Bruza P, et al. Algorithm development for intrafraction radiotherapy beam edge verification from Cherenkov imaging. J Med Imag. 2018; 5:015001.

123. Hachadorian R, Bruza P, Jermyn M, Mazhar A, Cuccia D, Jarvis L, et al. Correcting Cherenkov light attenuation in tissue using spatial frequency domain imaging for quantitative surface dosimetry during whole breast radiation therapy. J Biomed Opt. 2018; 24:1–10.

124. Glaser AK, Zhang R, Gladstone DJ, Pogue BW, Optical dosimetry of radiotherapy beams using Cherenkov radiation: The relationship between light emission and dose. Phys Med Biol. 2014; 59:3789–811.

125. Newman F, Asadi-Zeydabadis M, Durairaj V, Ding M, Stuhr K, Kavanagh B, Visual sensations during megavoltage radiotherapy to the orbit attributable to Cherenkov radiation. Med Phys. 2008; 35:77–80.

126. Fazio GG, Jelley JV, Charman WN, Generation of Cherenkov light flashes by cosmic radiation within the eyes of the Apollo astronauts. Nature 1970; 228:260–4.

127. Meredith WJ, Radium Dosage: The Manchester System. Edinburgh: E&S Livingstone Ltd; 1947.

128. Marinelli LD, Dosage determinations with radioactive isotopes. Am J Roentgenol Radium Ther. 1942; 47:210–6.

129. Marinelli LD, Quimby EH, Hine GJ, Dosage determination with radioactive isotopes: II. Practical considerations in therapy and protection. Am J Roentgenol Radium Ther. 1948; 59:260–81.

130. Mayneord WV, Energy absorption: IV. The mathematical theory of integral dose in radium therapy. Br J Radiol. 1945; 18:12–9.

131. Loevinger R, Berman M, A schema for absorbed-dose calculations for biologically-distributed radionuclides. MIRD Pamphlet No. 1. J Nucl Med. 1968; 9:7–14.

132. Loevinger R, Calculation of radiation dosage in internal therapy with I-131. OSAEC Conference. Atomic Energy Commission, 1955.

133. Brownell GL, Ellet WH, Reddy AR, Absorbed fractions for photon dosimetry. MIRD Pamphlet No. 3. J Nucl Med. 1968; 9:27–39.

134. Fisher HR, Snyder WS, Distribution of dose in the body from a source of gamma rays distributed uniformly in an organ. In: Proceedings of the First International Congress in Radiation Protection. 1966: 1473–86.

135. Warner GG, Poston JW, Snyder WS, Absorbed dose in phantoms which represent various aged male humans from external sources of photons as a function of age. Health Phys. 1975; 28:599–603.

136. Cloutier RJ, Snyder WS, Watson EE, Pregnant woman model for absorbed fraction calculations. IVth International Congress of the International Radiation Protection Association. 1977.

137. Central Intelligence Agency, Directorate of Intelligence. Terrorist CBRN: Materials and Effects; 2003.

138. Turai I, Darroudi F, Lloyd D, The new IAEA manual on cytogenetic biodosimetry. In: Ricks RC, Berger ME, O'Hara FM, editors. The Medical Basis for Radiation-Accident Preparedness. Parthenon Publishing Group; 2001.

139. Romm H, Wilkins RC, Coleman CN, Lillis-Hearne PK, Pellmar TC, Livingston GK, et al. Biological dosimetry by the triage dicentric chromosome assay: potential implications for treatment of acute radiation syndrome in radiological mass casualties. Radiat Res. 2011; 175:397–404.

140. Fattibene P, Wojcik A, Biodosimetric tools for a fast triage of people accidentally exposed to ionising radiation. Ann Ist Super Sanità 2009; 45:231–312.

141. Horn S, Barnard S, Rothkamm K, Gamma-H2AX-based dose estimation for whole and partial body radiation exposure. PLoS One 2011; 6:e25113.

142. Blakely WF, Salter CA, Prasanna PG, Early-response biological dosimetry-recommended countermeasure enhancements for mass-casualty radiological incidents and terrorism. Health Phys 2005; 89:494–504.

143. Kulka U, Ainsbury L, Atkinson M, Barquinero JF, Barrios L, Beinke C, et al. Realising the European Network in Biological Dosimetry (RENEB). Radiat Prot Dosimetry 2012; 151:621–5.

144. Fattibene P, Trompier F, Wieser A, Brai M, Ciesielski B, De Angelis C, et al. EPR dosimetry intercomparison using smart phone touch screen glass. Radiat Environ Biophys 2014; 53:311–20.

145. Trompier F, Bassinet C, Della Monaca S, Romanyukha A, Reyes R, Clairand I, Overview of physical and biophysical techniques for accident dosimetry. Radiat Prot Dosimetry 2010; 144:571–4.

146. Voisin P, Barquinero JF, Blakely B, Lindholm C, Lloyd D, Luccioni C, et al. Towards a standardization of biological dosimetry by cytogenetics. Cell Mol Biol. 2002; 48:501–4.

147. Das IJ, Ding GX, Ahnesjö A, Small fields: Nonequilibrium radiation dosimetry. Med Phys. 2012; 39:446–55.

148. Charles PH, Crowe SB, Kairn T, Kenny J, Knight RT, Langton CM, et al. Practical aspects of quality assurance in radiation therapy. Australas Phys Eng Sci Med. 2020; 43:619–35.

149. Scott AJ, Kumar S, Nahum AE, Fenwick JD, Characterizing the influence of detector density on dosimeter response in non-equilibrium small photon fields. Phys Med Biol. 2017; 62:5800–21.

150. Das IJ, Francescon P, Moran JM, Ahnesjö A, Aspradakis MM, Cheng CW, et al. Report of AAPM Task Group 155: Megavoltage photon beam dosimetry in small fields and non-equilibrium conditions. Med Phys. 2021; 48:e886–e921.

151. Gao Y, Liu R, Chang CW, Charyyev S, Zhou J, Bradley JD, et al. A potential revolution in cancer treatment: A topical review of FLASH radiotherapy. J Appl Clin Med Phys. 2022; 23:e13790.

152. Favaudon V, Caplier L, Monceau V, Pouzoulet F, Sayarath M, Fouillade C, et al. Ultrahigh dose-rate FLASH irradiation increases the differential response between normal and tumor tissue in mice. Sci Transl Med. 2014; 6:245ra93.

153. Marcu LG, Bezak E, Peukert DD, Wilson P, Translational research in FLASH radiotherapy: from radiobiological mechanisms to in vivo results. Biomedicines. 2021; 9:181.

154. Romano F, Bailat C, Jorge PG, Lerch MLF, Darafsheh A, Ultrahigh dose rate dosimetry: challenges and opportunities for FLASH radiation therapy. Med Phys. 2022; 49:4912–32.

155. Paganetti H, Range uncertainties in proton therapy and the role of Monte Carlo simulations. Phys Med Biol. 2012; 57:R99–R117.

156. Knopf AC, Lomax A, In vivo proton range verification: a review. Phys Med Biol. 2013; 58:R131–R160.

157. Paganetti H, Relative biological effectiveness (RBE) values for proton beam therapy. Variations as a function of biological endpoint, dose, and linear energy transfer. Phys Med Biol. 2014; 59:R419–72.

158. Clasie B, Depauw N, Fransen M, Gomà C, Panahandeh HR, Seco J, et al. Golden beam data for proton pencil-beam scanning. Phys Med Biol. 2012; 57:1147–58.

159. Knopf A, Parodi K, Bortfeld T, Shih HA, Paganetti H, Systematic analysis of biological and physical limitations of proton beam range verification with offline PET/CT scans. Phys Med Biol. 2009; 54:4477–95.

160. Polf JC, Avery S, Mackin DS, Beddar S, Imaging of prompt gamma rays emitted during delivery of clinical proton beams with a Compton camera: Feasibility studies for range verification. Phys Med Biol. 2015; 60:7085–99.

161. Unkelbach J, Paganetti H, Robust proton treatment planning: Physical and biological optimization. Semin Radiat Oncol. 2018; 28:88–96.

162. Krane KS, Introductory Nuclear Physics. 3rd ed. Toronto, Canada: John Wiley & Sons, Inc; 1988.

163. Thomas DJ, ICRU Report 85: Fundamental quantities and units for ionizing radiation. Radiat Prot Dosim. 2012; 150:550–2.

164. Heilbronn LH, Borak TB, Townsend LW, Tsai PE, Burnham CA, McBeth RA, Neutron yields and effective doses produced by Galactic Cosmic Ray interactions in shielded environments in space. Life Sci Space Res. 2015; 7:90–9.

165. The 2007 recommendations of the International Commission on Radiological Protection. ICRP Publication 103. Ann ICRP 2007; 37:1–332.

166. Rossi HH, Zaider M, Microdosimetry and its Applications. Berlin/New York: Springer; 1996.

167. Fisher DR, Fahey FH, Appropriate use of effective dose in radiation protection and risk assessment. Health Phys 2017; 113:102–9.

168. Absorbed Dose Determination in External Beam Radiotherapy: An International Code of Practice for Dosimetry Based on Standards of Absorbed Dose to Water. Vienna: International Atomic Energy Agency; 2000.

169. Zhivulin VE, Pesin LA, Ivanov DV, Special aspects of the temperature dependence of EPR absorption of chemically carbonized polyvinylidene fluoride derivatives. Phys Solid State. 2016; 58:86–90.

170. Sykora GJ, Akselrod MS, Vanhavere F, Performance of fluorescence nuclear track detectors in mono-energetic and broad spectrum neutron fields. Radiat Meas. 2009; 44:988–91.

171. Perali I, Celani A, Bombelli L, Fiorini C, Camera F, Clementel E, et al. Prompt gamma imaging of proton pencil beams at clinical dose rate. Phys Med Biol. 2014; 59:5849–71.

172. Patch SK, Hoff DE, Webb TB, Sobotka LG, Zhao T, Two-stage ionoacoustic range verification leveraging Monte Carlo and acoustic simulations to stably account for tissue inhomogeneity and accelerator-specific time structure: A simulation study. Med Phys 2018; 45:783–93.

173. Urbanová V, Holá K, Bourlinos AB, Čépe K, Ambrosi A, Loo AH, et al. Thiofluorographene: hydrophilic graphene derivative with semiconducting and genosensing properties. Adv Mater. 2015; 27:2305–10.

174. Gu M, Liu Y, Chen T, Du F, Zhao X, Xiong C, et al. Is graphene a promising nano-material for promoting surface modification of implants or scaffold materials in bone tissue engineering? Tissue Eng Part B Rev. 2014; 20:477–91.

175. Janzen EG, Spin trapping. Acc Chem Res. 1971; 4:31–40.

176. Regulla D, From dating to biophysics: 20 years of progress in applied ESR spectroscopy. Appl Radiat Isot. 2000; 52:1023–30.

177. Regulla DF, Deffner U, Dosimetry by ESR spectroscopy of alanine. Int J Appl Radiat Isot. 1982; 33:1101–14.

178. Fattibene P, Callens F, EPR dosimetry with tooth enamel: A review. Appl Radiat Isot. 2010; 68:2033–116.

179. Mijnheer BJ, Battermann JJ, Wambersie A, What degree of accuracy is required and can be achieved in photon and neutron therapy? Radiother Oncol. 1987; 8:237–52.

180. Dische S, Saunders MI, Williams C, Hopkins A, Aird E, Precision in reporting the dose given in a course of radiotherapy. Med Dosim. 1994; 19:188.

181. Akselrod GM, Akselrod MS, Benton ER, Yasuda N, A novel Al2O3 fluorescent nuclear track detector for heavy charged particles and neutrons. Nucl Instrum Meth Phys Res B. 2006; 247:295–306.

182. Akselrod M, Kouwenberg J, Fluorescent nuclear track detectors: review of past, present and future of the technology. Radiat Meas. 2018; 117:35–51.

183. Kraan AC, Range verification methods in particle therapy: Underlying physics and Monte Carlo modeling. Front Oncol. 2015; 5:150.

184. Dedes G, Pinto M, Dauvergne D, Freud N, Krimmer J, Létang JM, et al. Assessment and improvements of Geant4 hadronic models in the context of prompt-gamma hadrontherapy monitoring. Phys Med Biol. 2014; 59:1747–72.

185. Min CH, Kim CH, Youn MY, Kim JW, Prompt gamma measurements for locating the dose falloff region in the proton therapy. Appl Phys Lett. 2006; 89:183517.

186. Verburg JM, Riley K, Bortfeld T, Seco J, Energy- and time-resolved detection of prompt gamma-rays for proton range verification. Phys Med Biol. 2013; 58:L37–L49.

187. Testa E, Bajard M, Chevallier M, Dauvergne D, Le Foulher F, Freud N, et al. Monitoring the Bragg peak location of 73 MeV/u carbon ions by means of prompt γ-ray measurements. Appl Phys Lett. 2008; 93:093506.

188. Testa M, Bajard M, Chevallier M, Dauvergne D, Freud N, Henriquet P, et al. Real-time monitoring of the Bragg-peak position in ion therapy by means of single photon detection. Radiat Environ Biophys. 2010; 49:337–43.

189. Böhlen TT, Cerutti F, Chin MP, Fassò A, Ferrari A, Ortega PG, et al. The FLUKA code: Developments and challenges for high energy and medical applications. Nuclear Data Sheets. 2014; 120:211–4.

190. Allison J, Amako K, Apostolakis J, Arce P, Asai M, Aso T, et al. Recent developments in Geant4. Nucl Instrum Methods Phys Res A. 2016; 835:186–225.

191. MCNP version 6.2 release notes. Los Alamos, NM: Los Alamos National Laboratory (LANL); 2018.

192. Pinto M, Dauvergne D, Freud N, Krimmer J, Létang JM, Testa E, Assessment of Geant4 prompt-gamma emission yields in the context of proton therapy monitoring. Front Oncol. 2016; 6:10.

193. Hueso-González F, Golnik C, Berthel M, Dreyer A, Enghardt W, Fiedler F, et al. Test of Compton camera components for prompt gamma imaging at the ELBE bremsstrahlung beam. J Instrum. 2014; 9:P05002.

194. Polf JC, Panthi R, Mackin DS, McCleskey M, Saastamoinen A, Roeder BT, et al. Measurement of characteristic prompt gamma rays emitted from oxygen and carbon in tissue-equivalent samples during proton beam irradiation. Phys Med Biol. 2013; 58:5821–31.

195. Jones KC, Witztum A, Sehgal CM, Avery S, Proton beam characterization by proton-induced acoustic emission: Simulation studies. Phys Med Biol. 2014; 59:6549–63.

196.

Ahmad M, Xiang L, Yousefi S, Xing L, Theoretical detection threshold of the proton-acoustic range verification technique. Med Phys. 2015; 42:5735–44. Google Scholar

197.

Assmann W, Kellnberger S, Reinhardt S, Lehrack S, Edlich A, Thirolf PG, et al. Ionoacoustic characterization of the proton Bragg peak with submillimeter accuracy. Med Phys. 2015; 42: 567–74. Google Scholar

198.

Alsanea F, Moskvin V, Stantz KM, Feasibility of RACT for 3D dose measurement and range verification in a water phantom. Med Phys. 2015; 42:937–46. Google Scholar

199.

Jones KC, Sehgal CM, Avery S, How proton pulse characteristics influence protoacoustic determination of proton-beam range: Simulation studies. Phys Med Biol. 2016; 61:2213–42. Google Scholar

200.

Jones KC, Stappen FV, Bawiec CR, Janssens G, Lewin PA, Prieels D, et al. Experimental observation of acoustic emissions generated by a pulsed proton beam from a hospital-based clinical cyclotron. Med Phys. 2015; 42:7090–7. Google Scholar

201. Patch SK, Kireeff Covo M, Jackson A, Qadadha YM, Campbell KS, Albright RA, et al. Thermoacoustic range verification using a clinical ultrasound array provides perfectly co-registered overlay of the Bragg peak onto an ultrasound image. Phys Med Biol. 2016; 61:5621–38.

202. Lehrack S, Assmann W, Bertrand D, Henrotin S, Herault J, Heymans V, et al. Submillimeter ionoacoustic range determination for protons in water at a clinical synchrocyclotron. Phys Med Biol. 2017; 62:L20–L30.

203. Jones KC, Nie W, Chu JCH, Kassaee A, Sehgal CM, Avery S. Acoustic-based proton range verification in heterogeneous tissue: simulation studies. Phys Med Biol. 2018; 63:025018.

204. Kellnberger S, Assmann W, Lehrack S, Reinhardt S, Thirolf P, Queirós D, et al. Ionoacoustic tomography of the proton Bragg peak in combination with ultrasound and optoacoustic imaging. Sci Rep. 2016; 6:29305.

205. Nie W, Jones KC, Petro S, Kassaee A, Sehgal CM, Avery S. Proton range verification in homogeneous materials through acoustic measurements. Phys Med Biol. 2018; 63:025036.

206. Wehrmann C, Senftleben S, Zachert C, Müller D, Baum RP. Results of individual patient dosimetry in peptide receptor radionuclide therapy with 177Lu DOTATATE and 177Lu DOTANOC. Cancer Biother Radiopharm. 2007; 22:406–16.

207. Eberlein U, Cremonesi M, Lassmann M. Individualized dosimetry for theranostics: necessary, nice to have, or counterproductive? J Nucl Med. 2017; 58:97S–103S.

208. Aberg I, Karami P, Aberg D, Ekspong J, Jornten-Karlsson M. Radiation sensitivity of graphene-based electronic devices. Radiat Phys Chem. 2011; 80:23–27.

209. Rattfalt L, Lindahl OA, Berggren M, Geladi P. Graphene-based materials in electrochemistry. Chem Soc Rev. 2013; 42:6067–103.

210. Kumar S, Pavelyev V, Mishra P, Tripathi S, Jain SC. Graphene-based materials and their composites as coatings. Acta Mater. 2017; 129:1–16.

211. Lee J, Kim J, Kim S, Min DH. Graphene-based flexible and stretchable electronics. Adv Mater. 2019; 31.

212. Zhao W, Karp JM, Ferrari M, Serda RE. Bioengineering of injectable preformed scaffolds with shape-memory properties. Nat Protoc. 2015; 10:1672–84.

213. Bianco A, Cheng HM, Enoki T, Gogotsi Y, Hurt RH, Koratkar N, et al. All in the graphene family – A recommended nomenclature for two-dimensional carbon materials. Carbon. 2018; 65:1–6.

214. Chen G, Wang Y, Yang M, Xu Z, Gao W, Pan C, et al. Graphene-based materials for hydrogen generation from light-driven water splitting. Adv Mater. 2016; 28:5738–46.

215. Yang Y, Asiri AM, Tang Z, Du D, Lin Y. Graphene-based materials for biomedical applications. Mater Today. 2018; 16:365–73.

216. Mackie TR, Kapatoes J, Ruchala K, Lu W, Wu C, Olivera G, et al. Image guidance for precise conformal radiotherapy. Int J Radiat Oncol Biol Phys. 2003; 56:89–105.

217. McBain CA, Henry AM, Sykes J, Amer A, Marchant T, Moore CM, et al. X-ray volumetric imaging in image-guided radiotherapy: The new standard in on-treatment imaging. Int J Radiat Oncol Biol Phys. 2006; 64:625–34.

218. Chow JC, Leung MK, Lindsay PE, Jaffray DA. Dosimetric variation due to the photon beam energy in the small-animal irradiation: a Monte Carlo study. Med Phys. 2010; 37:5322–29.

219. Pedersen KH, Kunugi KA, Hammer CG, Culberson WS, DeWerd LA. Radiation biology irradiator dose verification survey. Radiat Res. 2016; 185:163–8.

220. Desrosiers M, DeWerd L, Deye J, Lindsay P, Murphy MK, et al. The importance of dosimetry standardization in radiobiology. J Res Natl Inst Stand Technol. 2013; 118:403–18.

221. Brodin NP, Chen Y, Yaparpalvi R, Guha C, Tome WA. Dosimetry formalism and implementation of a homogenous irradiation protocol to improve the accuracy of small animal whole-body irradiation using a 137Cs irradiator. Health Phys. 2016; 110:26–38.

222. Olding T, Schreiner LJ. Cone-beam optical computed tomography for gel dosimetry II: imaging protocols. Phys Med Biol. 2011; 56:1259–73.

223. Kairn T, Crowe S, Markwell T. Use of 3D printed materials as tissue-equivalent phantoms. World Congress on Medical Physics and Biomedical Engineering. 2015.

224. Bache ST, Juang T, Belley MD, Koontz BF, Adamovics J, Yoshizumi TT, et al. Investigating the accuracy of microstereotactic-body-radiotherapy utilizing anatomically accurate 3D printed rodent-morphic dosimeters. Med Phys. 2015; 42:846–55.

225. Newton J, Oldham M, Thomas A, Li Y, Adamovics J, Kirsch DG, et al. Commissioning a small-field biological irradiator using point, 2D, and 3D dosimetry techniques. Med Phys. 2011; 38:6754–62.

226. van der Merwe D, Van Dyk J, Healy B, Zubizarreta E, Izewska J, Mijnheer B, et al. Accuracy requirements and uncertainties in radiotherapy: a report of the International Atomic Energy Agency. Acta Oncol. 2017; 56:1–6.

227. Moiseenko V, Banath JP, Duzenli C, Olive PL. Effect of prolonging radiation delivery time on retention of gammaH2AX. Radiat Oncol. 2008; 3:1–5.

228. Alberts B, Kirschner MW, Tilghman S, Varmus H. Rescuing US biomedical research from its systemic flaws. Proc Natl Acad Sci USA. 2014; 111:5773–7.

229. Errington TM, Iorns E, Gunn W, Tan FE, Lomax J, Nosek BA. An open investigation of the reproducibility of cancer biology research. eLife. 2014; 3.

230. Stern W, Alaei P, Berbeco R, DeWerd LA, Kamen J, MacKenzie C, et al. Achieving consistent reporting of radiation dosimetry by adoption of Compatibility in Irradiation Research Protocols Expert Roundtable (CIRPER) recommendations. Radiat Res. 2024; 201:267–9.

231. Poirier Y, DeWerd LA, Trompier F, Santos MD, Sheng K, Kunugi K, et al. Minimum reporting standards should be expected for preclinical radiobiology irradiators and dosimetry in the published literature. Radiat Res. 2023; 200:217–22.

232. Trompier F, DeWerd LA, Poirier Y, Dos Santos M, Sheng K, Kunugi KA, Winters TA, DiCarlo AL, Satyamitra M. Minimum reporting standards should be expected for preclinical radiobiology irradiators and dosimetry in the published literature. Int J Radiat Biol. 2024; 100:1–6.
Daniel Johnson, H. Harold Li, and Bruce F. Kimler "Dosimetry: Was and Is an Absolute Requirement for Quality Radiation Research," Radiation Research 202(2), 102–129, (2 July 2024). https://doi.org/10.1667/RADE-24-00107.1
Received: 11 April 2024; Accepted: 9 May 2024; Published: 2 July 2024