International Institute of Medicine and Science, Inc. 
Dedicated to the Preservation and Betterment of  Human Lives Through Advanced Medical Sciences Research, Education, and Service to the Communities we Serve and Humanity 
A Member of the World Association of Non-Governmental Organizations (WANGO)              
        Several research programs are conducted at the International Institute of Medicine and Science, Inc. These are either completed or in various stages of progress. Synopses of these programs follow:

        CRITICAL CARE MEDICINE:               
        Project 1: "Probing Small Vessels Microcirculation" * [Completed]
        Project Director: Dr. Alain L. Fymat                 

        During approximately the past fifteen years, the molecular revolution has taken place and the applications of molecular technology in medicine have multiplied. Being the smallest functional unit of the cardiovascular system and the site of interaction between blood and tissue, the microcirculation creates the critical delivery site that establishes the environment for cell function. The microvascular system is both the conceptual and factual connection between systemic circulation and cellular function, and ultimately molecular biology; it is thus the final link between the cardiovascular system and cellular, and ultimately molecular, processes.
        The microcirculation is clinically evaluated primarily on the basis of physical examination, including local temperature, swelling, pain, and analyses of tissues or fluids, including entering and exiting blood flow (Joy and Weil, 1969). However, the current clinical focus on the macrocirculation, including cardiac output, large vessel pressures and flows, and the fluid contained within these vessels (red cells, plasma), provides very limited insight into the mechanisms operating at the microvascular exchange sites, specifically the capillaries. Cellular processes within the tissues are therefore explained only in part by actions at the terminal delivery sites (Ince, 2002).
        The clinical implications are especially pertinent to the understanding of "distributive shock". Distributive shock represents the failure of mammalian blood circulation even though the macrocirculation, and specifically cardiac output and arterial pressure, are maintained at or near normal levels. In such clinical settings, there is evidence that the microcirculation is responsible for impaired tissue oxygenation and delivery of vital substrates in addition to oxygen, together with failure to eliminate products of metabolism including carbon dioxide and lactic acid (see, for example, Weil and Tang, 2007).              
        The diagnosis and monitoring of the microcirculation have wider clinical applications. Thus:
        (1) In the assessment of circulatory failure in patients in the ICU/ER, microcirculation impairment can provide a triage tool in the case of trauma patients and an indicator of shock and sepsis in early goal-directed therapy. It can also serve as an early alarm of impending shock in cardiac surgery and for monitoring shock therapy;
        (2) In the assessment of heart function, it can provide a CPR resuscitation tool, identify conditions of low cardiac output, optimize cardiac assist devices, and monitor perfusion during cardiac surgery; 
        (3) In the assessment of tissue perfusion, it can serve to monitor wound healing, transplant surgery and plastic surgery; and 
        (4) Microcirculation can also be used as an indicator of diabetes, brain pressure/perfusion, and in pediatrics. 
        Current technology has improved the capability to directly view the microcirculation in vivo and relate it to the now commonly measured macrocirculation. Accordingly, the opportunity has opened to increasingly relate cellular and especially molecular processes to the terminal delivery system. Nonetheless, in contrast to advances in genetic and molecular biology, and a more advanced understanding of organs and systems, we are still greatly limited in our capability for in vivo and real-time visualization of the microcirculation.
        We are therefore prompted to search for quantitative methods of imaging blood circulation within arterioles, capillaries and venules. Whereas a number of techniques have already been developed, a basic limitation is that only superficial layers of tissues and organs can be imaged, so that the information they provide cannot be assumed to apply to the deeper layers. This project searches for appropriate deep-tissue and deep-organ probing techniques.

* This project was initiated by ALF and conducted in part with the collaboration of Dr. Max Harry Weil at the Weil Institute of Critical Care Medicine, Rancho Mirage, California.               

        Project 2: "Imaging of Human Microcirculatory Abnormalities"* [Completed]
        Project Director: Dr. Alain L. Fymat            

        Adequate microcirculation is vital for the transport of oxygen and other nutrients to organs and tissues. It takes place through the smallest vessels (less than approximately 100 microns in diameter), which consist of arterioles, capillaries, and venules. The main cell types involved are the endothelial cells that line the inside of the microvessels, smooth muscle cells (mostly in arterioles), red blood cells (erythrocytes), leukocytes, and blood plasma components.
        For any organ, the microcirculation appears to be specifically organized to serve the special needs of that organ. Distinctive microvascular pathologies are associated with different disease states (diabetes, hypertension, chronic heart disease, chronic ulcers, sepsis,...).             
        In health, the function of the microcirculation is to transport oxygen and other nutrients for necessary tissue oxygenation, thus determining organ function, and to ensure adequate immunological function. In disease, when not severely compromised, the microcirculation attempts to deliver therapeutic drugs to target cells. Fundamentally, the microcirculation is the systemic/molecular link. As the smallest functional unit of the cardiovascular system and the site of interaction between blood and tissue, it creates the environment necessary for cell function. Analysis of cardiovascular pathology and pathophysiology gives a unique perspective on the disease process, providing the link between conventional hemodynamic measurements and the microcirculation that supplies the metabolic requirements for cellular function (see, e.g., Weil and Tang, 2007).
        This project aims to unravel the mechanisms operating at the microvascular exchange sites, specifically the capillaries, and to design a class of in vivo, real-time imaging instruments to visualize the microcirculation.

* This project was initiated by ALF and conducted in part with the collaboration of Dr. Max Harry Weil at the Weil Institute of Critical Care Medicine, Rancho Mirage, California.              

        Project 3: "Microcirculation Polarization Spectral Imager"* [Completed]
        Project Directors: Dr. Alain L. Fymat and Dr. Max Harry Weil            

        The author's spectral photopolarimeter is a medical device for the non-invasive, real-time dynamic imaging and characterization of blood microcirculation in healthy and diseased microvessels (capillaries, arterioles, venules). The instrument includes objective optics, a tunable spectral filter, a polarizing beam splitter, an incident polarizer for the sub-beam reflected by the beam splitter, and either a plane mirror oriented at the Brewster angle or, for a more general orientation of the mirror, a second incident polarizer identical to the first for the sub-beam transmitted by the beam splitter. The polarized reflected and transmitted sub-beams together are then directed onto the tissue or organ to be imaged and characterized. Owing to the multiple scattering processes taking place in the tissue or organ, these beams are diffusely reflected and emerge in a state of partial polarization. They return to the beam splitter, are passed through a polarizer-analyzer oriented orthogonally to the incident polarizer(s), collected by receiving optics, and focused onto a solid-state detector (focal plane array) mounted within a camera system. Electronic and signal processing modules subsequently record and analyze the data (static images, video displays) to characterize the microvessels (diameter and distribution, internal flow velocity distribution) and their tissue perfusion capability. The data and the results of the analyses are forwarded for electronic storage and subsequent retrieval and telecommunication.
        This project aims to develop the above concept into a clinical medical device for diagnostic and therapeutic purposes. It will provide the detailed engineering design for a prototype instrument to be built by others.
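        The role of the crossed polarizer-analyzer pair described above can be illustrated with a short numerical sketch. The degrees of polarization used here are illustrative assumptions, not measured instrument values: surface glare retains the incident polarization and is rejected by the orthogonal analyzer, while the partially depolarized light returning from depth passes in part.

```python
# Sketch of crossed-polarizer rejection of surface glare. Assumption: light
# with degree of polarization p along the incident axis has a fraction p fully
# blocked by the orthogonal analyzer, while half of the unpolarized remainder
# passes. The numeric degrees of polarization below are illustrative.

def through_crossed_analyzer(intensity, degree_of_polarization):
    """Intensity transmitted by an analyzer crossed with the incident polarizer."""
    unpolarized = intensity * (1.0 - degree_of_polarization)
    return unpolarized / 2.0          # half of unpolarized light passes

surface_glare = through_crossed_analyzer(1.0, 1.0)   # specular: fully polarized
deep_signal = through_crossed_analyzer(1.0, 0.2)     # multiply scattered light
# The analyzer rejects the glare entirely while passing a substantial part of
# the depolarized light returning from depth, making the microvessels visible.
```

This is why only the multiply scattered, partially depolarized light contributes to the image of the microvessels.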

* This project was initiated by ALF and conducted in part with the collaboration of Dr. Wanchun Tang, Dr. Giuseppe Ristagno and Mr. Joe Bisera at the Weil Institute of Critical Care Medicine, Rancho Mirage, California.

        Project 4: "Blood Microcirculation as a Homeostatic Process" [Completed]            
        Project Director: Dr. Alain L. Fymat              

        Homeostasis is defined as "the self-regulatory mechanism that allows organisms to maintain themselves in a state of dynamic balance with their variables fluctuating between tolerance limits" (Ref: Walter B. Cannon, "The Wisdom of the Body", Norton, New York, 1932; rev. ed., 1939). Viewed from a systems point of view, blood microcirculation is a living system that cannot be understood by (Cartesian) analysis, in the sense that the properties of the parts (capillaries, venules, veins, arteries,...) are not intrinsic properties but can be understood only within the context of the larger whole. Unlike a closed system, which settles into a state of equilibrium, it is an open system that operates and maintains itself far from equilibrium in a "steady state" characterized by continual flow and change, and that self-regulates.
        Feedback is the essential mechanism of homeostasis. It means the conveying of information about the outcome of any process or activity to its source. Thus, the concept of the feedback loop introduced by cyberneticists led to new perceptions of the many self-regulatory processes characteristic of life, including blood microcirculation. Feedback loops are ubiquitous in the living world because they are a special feature of the nonlinear network patterns that are characteristic of living systems. Cyberneticists distinguish between two kinds of feedback - self-balancing (or "negative") and self-reinforcing (or "positive") feedback.                       
        The key to a comprehensive theory of the synthesis of living systems, including blood microcirculation, lies in two very different approaches: the study of substance (or "structure") and the study of form (or "pattern"). Structure involves quantities, while pattern involves qualities.
        This work studies blood microcirculation as a homeostatic nonlinear system that is open and self-regulating through self-balancing feedback.
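        The self-balancing ("negative") feedback mechanism described above can be sketched numerically. The setpoint, gain, and disturbance below are illustrative assumptions, not physiological values: information about the outcome (the deviation of a regulated variable from its setpoint) is conveyed back to the source and used to correct it.

```python
# Minimal sketch of a self-balancing (negative) feedback loop: a regulated
# variable is perturbed, and a correction proportional to the deviation from
# the setpoint is fed back. All numbers are illustrative assumptions.

def regulate(setpoint=100.0, gain=0.5, disturbance=20.0, steps=50):
    """Simulate a disturbed variable corrected by negative feedback."""
    value = setpoint + disturbance          # initial perturbation
    history = [value]
    for _ in range(steps):
        error = value - setpoint            # feedback: outcome conveyed to source
        value -= gain * error               # self-balancing correction
        history.append(value)
    return history

trajectory = regulate()
# Each step shrinks the deviation by the factor (1 - gain), so the variable
# settles back within its tolerance limits; a self-reinforcing ("positive")
# loop would instead amplify the deviation and diverge.
```

The geometric decay of the deviation is the signature of the self-balancing loop; the variable remains within its tolerance limits despite the disturbance.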

        Project 5: "Modeling the Blood Microcirculation as a Closed Loop Catalytic Network" [Completed]  
        Project Director: Dr. Alain L. Fymat               

        Catalytic reactions are crucial processes in the chemistry of life. Whereas the most common and most efficient catalysts are the enzymes, which are the essential components of cells that promote vital metabolic processes, the catalysts of the microcirculation have not been definitively identified. Eigen observed that in biochemical systems far from equilibrium, i.e., systems exposed to energy flows, different catalytic reactions combine to form complex networks that may contain closed loops (Ref: M. Eigen, "Molecular Self-Organization and the Early Stages of Evolution", Quarterly Reviews of Biophysics, 4, 2&3, 149, 1971). These catalytic cycles are at the core of self-organizing systems. They also play an essential role in the metabolic functions of living organisms. Catalytic cycles tend to interlock to form closed loops in which the catalysts in one cycle act as catalysts in the subsequent cycle. "Hypercycles" are those loops in which each link is itself a catalytic cycle. Hypercycles turn out to be not only remarkably stable, but also capable of self-replication and of correcting replication errors. One of the most striking lifelike properties of hypercycles is that they can evolve by passing through instabilities and creating successively higher levels of organization that are characterized by increasing diversity and richness of components and structures.
        Maturana made two fundamental observations. First, he noted that "the nervous system operates as a closed network of interactions, in which every change of the interactive relations between certain components always results in a change of the integrative relations of the same or of other components" (Ref: Humberto Maturana, "Biology of Cognition"; published originally in 1970, reprinted in Humberto Maturana and Francisco Varela, "Autopoiesis and Cognition", D. Reidel, Dordrecht, Holland, 1980). This network pattern, in which the function of each component is to help produce and transform other components while maintaining the overall circularity of the network, is the basic "organization of the living". Next, Maturana postulated that the nervous system is not only self-organizing but also continually self-referring.
        In this project, we adopt Maturana's views and extend them to the better understanding of the microcirculation.               
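        The closed catalytic loop described above can be sketched as a toy model in the spirit of Eigen's hypercycle. The rate constants, step size, and initial concentrations below are illustrative assumptions, not measured quantities of the microcirculation: each member of the cycle replicates at a rate catalyzed by its predecessor, and a dilution flux holds the total concentration constant.

```python
# Toy hypercycle: species i is catalyzed by species i-1 around a closed loop,
# with a dilution term that keeps the total concentration fixed. All numeric
# values are illustrative assumptions.

def hypercycle_step(x, k, dt=0.01):
    """Advance the cycle one Euler step; species i is catalyzed by i-1."""
    n = len(x)
    growth = [k[i] * x[i] * x[(i - 1) % n] for i in range(n)]
    phi = sum(growth)                       # dilution flux (keeps total fixed)
    total = sum(x)
    return [x[i] + dt * (growth[i] - x[i] * phi / total) for i in range(n)]

x = [0.4, 0.3, 0.2, 0.1]                    # initial concentrations (total 1)
k = [1.0, 1.0, 1.0, 1.0]                    # catalytic rate constants
for _ in range(5000):
    x = hypercycle_step(x, k)
# The loop is self-maintaining: all members persist (none is driven to
# extinction) and the total concentration is conserved.
```

The circularity is explicit in the indexing: the last member catalyzes the first, closing the loop, which is the stability property Eigen emphasized.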

        Project 6: "Fractal Geometry Representation of the Blood Microcirculation" [Completed]
        Project Director: Dr. Alain L. Fymat              

        Fractal geometry is a powerful mathematical language to describe the fine-scale structure of chaotic attractors (Ref: Benoit Mandelbrot, "The Fractal Geometry of Nature", Freeman, New York, New York, 1983; first French edition published in 1975). The most striking property of these fractal shapes is their "self-similarity", that is, their characteristic patterns are found repeatedly at descending scales, so that their parts, at any scale, are similar in shape to the whole. The repeating branches of blood vessels may show patterns of such striking similarity that we are unable to tell which is which. Whereas it is impossible to predict the values of the variables of a chaotic system at a particular time, we can predict the qualitative features of the system's behavior. Similarly, it is impossible to calculate the length or area of a fractal shape, but we can define the degree of "jaggedness" in a qualitative way. It is indeed possible to define a number between 1 and 2 (so-called "fractal dimension") that characterizes the jaggedness of the microcirculation pattern. The more jagged the outline of the microvessels, the higher their fractal dimension.              
        In this project, we propose to show that the mathematical process of iteration performed thousands of times at different scales (so-called "fractal geometries") can produce computer-generated models that bear an astonishing resemblance to the microcirculation.
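        The fractal dimension described above can be estimated by box counting, a standard technique: cover a digitized outline with square boxes of shrinking size s, count the occupied boxes N(s), and take the slope of log N(s) versus log(1/s). The point set below is a smooth curve standing in for a microvessel outline (an assumption for illustration); it yields a dimension close to 1, whereas a jagged outline would yield a value between 1 and 2.

```python
import math

# Box-counting estimate of the fractal dimension of a 2-D point set.
# The test set is a smooth line (dimension ~1); a digitized microvessel
# outline, the intended application, would give a value between 1 and 2.

def box_count(points, s):
    """Number of boxes of side s occupied by the point set."""
    return len({(int(x / s), int(y / s)) for x, y in points})

def box_dimension(points, sizes=(0.1, 0.05, 0.025, 0.0125)):
    """Least-squares slope of log N(s) versus log(1/s)."""
    logs = [(math.log(1.0 / s), math.log(box_count(points, s))) for s in sizes]
    n = len(logs)
    mx = sum(u for u, _ in logs) / n
    my = sum(v for _, v in logs) / n
    num = sum((u - mx) * (v - my) for u, v in logs)
    den = sum((u - mx) ** 2 for u, _ in logs)
    return num / den

line = [(i / 10000.0, i / 10000.0) for i in range(10000)]  # smooth curve
dim = box_dimension(line)   # close to 1; a jagged outline gives a larger value
```

The more jagged the outline, the faster N(s) grows as s shrinks, and the larger the estimated dimension, which is exactly the "degree of jaggedness" discussed above.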

        Project 7: "Epinephrine Reduces Cerebral Perfusion During Cardiopulmonary Resuscitation"* [Completed]
        Project Directors: Dr. Giuseppe Ristagno, Dr. Wanchun Tang, and Dr. Max Harry Weil
        Project Co-Directors: Dr. Yun-Te Chang, Dr. Alain L. Fymat, Dr. Huang Lei, Dr. Shiije Sun, Carlos Castillo      
        (see: Critical Care Medicine Journal, Volume 37, Issue 4, April 2009)              

        Epinephrine has been the primary drug for cardiopulmonary resuscitation (CPR) for more than a century. The therapeutic rationale was to restore threshold levels of myocardial and cerebral blood flows by its alpha-1 and alpha-2 adrenergic agonist vasopressor actions. On the basis of coincidental observations on changes in microvascular flow in the cerebral cortex, we hypothesized that epinephrine selectively decreases microvascular flow.              
        A randomized prospective animal study was designed in which four groups of five male domestic pigs weighing 40+/- kg were investigated. After induction of anesthesia, endotracheal intubation was followed by mechanical ventilation. A frontoparietal bilateral craniotomy was created. Ventricular fibrillation was induced and left untreated for 3 minutes before the start of precordial compression, mechanical ventilation, and attempted defibrillation. Animals were randomized to receive central venous injections during CPR of (1) placebo, (2) epinephrine, (3) epinephrine in which both alpha-1 and beta adrenergic effects were blocked by previous administration of prazosin and propranolol, and (4) epinephrine in which both alpha-2 and beta adrenergic effects were blocked by previous administration of yohimbine and propranolol.
        Cerebral microcirculation blood flow (CMBF) was measured with orthogonal polarization spectral imaging. Cerebral cortical carbon dioxide and oxygen tensions (PbCO2 and PbO2) were concurrently measured using miniature tissue optical sensors. Each animal was resuscitated. No differences in the number of electrical shocks for defibrillation or in the duration of CPR preceding return of spontaneous circulation were observed. Yet, even as epinephrine induced increases in arterial pressure, it significantly decreased PbO2 tension and increased PbCO2 tension.
        Epinephrine therefore significantly decreased CMBF and increased indicators of cerebral ischemia. Reduced CMBF and magnified brain tissue ischemia during and after CPR were traced to the alpha-1 adrenergic agonist action of epinephrine. When the alpha-2 effects of epinephrine were blocked, reduced CMBF and tissue ischemia persisted. No differences in cardiac output, end tidal PCO2, arterial PO2 and PCO2, and brain temperature were observed before inducing cardiac arrest and following return of spontaneous circulation.                          
        In conclusion, in this model, epinephrine through its alpha-1 agonist action had adverse effects on cerebral microvascular blood flow such as to increase the severity of cerebral ischemia during CPR.     

* This project was conducted at the Weil Institute of Critical Care Medicine, Rancho Mirage, California.  

        LIFE SCIENCES:             

        Project # 1: "Systems Theory and Medicine" [Completed]              
        Project Director: Dr. Alain L. Fymat              

        "General systems theory is an important means of controlling and investigating the transfer of principles from one field to another, and it will no longer be necessary to duplicate or triplicate the discovery of the same principle in different fields isolated from each other. At the same time, by formulating exact criteria, general systems theory will guard against superficial analogies which are useless in science" (Ref: Ludwig von Bertalanffy, "General System Theory", Braziller, New York, 1968).
        Thus, general systems theory offers an ideal conceptual framework for unifying scientific disciplines that had become isolated and fragmented, such as medicine and science. Whereas this idea can yield a systemic conception of life, mind, and consciousness that transcends disciplinary boundaries and, indeed, holds the promise of unifying fields of study that were formerly separated, the present work deliberately limits itself to the unification of some scientific and medical concepts.


        Project 1: "Brain Physiologic and Pathologic Conditions Studied with Magnetic Resonance Diffusion-Weighted Imaging" [Completed]
        Project Director: Dr. Alain L. Fymat                       

        Diffusion-sensitive magnetic resonance (MR) imaging techniques provide great insight into physiologic and pathologic brain conditions. The diffusion processes of molecular water protons on a micron scale can be accurately and non-invasively imaged in near-real time. This project attempts to set forth the main clinical applications of the technique.
        Diffusion is the random, microscopic, thermally-driven translational movement of water and other small molecules in a tissue ("Brownian motion"), on a scale of approximately 15 microns over a 40-millisecond time frame. Several cases must be considered:
        (1) Case of a pure liquid: The expected distance a molecule moves in time can be modeled as a random walk with a Gaussian distribution. In MR imaging, the motion of water molecules by diffusion through a magnetic gradient results in an irreversible signal loss through intravoxel dephasing, which depends on the details of the pulse sequence used. The conclusions are that:             
                (a) With increased diffusion within a tissue, there is exponentially greater signal loss in the MR image;
                (b) The factor used to generate the Diffusion-Weighted (DW) imaging sequence (the so-called "b-factor") depends on the sequence used. It summarizes the gradient strength and timing employed, and represents how sensitive the sequence will be to diffusion effects. With increasing b-values, a trade-off occurs: the amount of diffusion-weighting increases while the signal decreases. While diffusion effects are present in routine MR imaging, causing signal attenuation of less than ~2%, special MR pulse sequences can be designed to provide images that are highly sensitive to diffusion effects. In such sequences, strong pulsed gradients are applied during evolution of the MR signal, which is generated either by the Spin Echo (SE) sequence or the Gradient Recall Echo (GRE) sequence, usually employing an Echo-Planar Imaging (EPI) readout.
        (2) T2-Shine-Through: In the simplified, modified Bloch equation, there is a large T2-component of the DW image, especially since TE ~ 75-100 milliseconds. This effect can be clinically significant, especially when abnormalities are extremely bright on T2-weighted images. It is at times difficult to determine whether a bright area on the DW image is due to real restricted diffusion or to T2 shine-through. Some clinical observations have shown, in a group of stroke patients, that T2-shine-through effects dominate the increased signal seen on DW images by the end of the first week after onset of symptoms. This points to the need for an Apparent Diffusion Coefficient (ADC).
        (3) Apparent Diffusion Coefficient (ADC) Map: In biological tissues, totally free diffusion usually does not occur as there are numerous restrictions such as cell membranes and other boundaries. ADC is therefore a more accurate representation of the measured diffusion constants within biological tissues.
        (4) Artifacts: DW images are frequently degraded by artifacts from eddy currents, flow, and gross patient motion that manifest themselves as bright edges and streaks along the surface of the brain at the skull base. They result from the EPI sequences most commonly used.
        (5) Anisotropy: An isotropic medium is one in which molecules freely diffuse in all directions. Biological tissues, on the other hand, are heterogeneous due to the presence of cell membranes and, in the case of the brain white matter, myelin sheaths. Diffusion in such a medium is therefore a directionally-dependent phenomenon also known as "anisotropy" (that is, the rate of water diffusion varies with direction due to biological constraints). White matter tracts are known to be anisotropic due to myelinated pathways and they demonstrate markedly different signals depending on the direction of application of the diffusion gradient. They are bright (less diffusion) if the diffusion gradient is applied in a direction other than the orientation of the white matter fibers and dark if the gradient is applied along the axis of the white matter tract. 
        (6) Isotropic Image: An isotropic image, which minimizes the effects from the anisotropic nature of the biological medium, can be created by simply averaging the signal obtained from the gradients applied in the three orthogonal directions.
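        The signal model implied above can be sketched numerically: the DW signal falls exponentially with the b-factor, S(b) = S0 exp(-b ADC), so the ADC can be recovered from two acquisitions (e.g., b = 0 and b = 1000 s/mm^2), and an isotropic image can be formed by simple averaging over three orthogonal gradient directions. The numeric values below are illustrative assumptions, not patient measurements.

```python
import math

# Sketch of the DW signal model: S(b) = S0 * exp(-b * ADC). The ADC is
# recovered from two b-values, and an "isotropic" DW signal is formed by
# averaging over three orthogonal gradient directions. Values are illustrative.

def dw_signal(s0, b, adc):
    """DW signal at b-factor b (s/mm^2) for a tissue with the given ADC."""
    return s0 * math.exp(-b * adc)

def apparent_diffusion_coefficient(s_b0, s_b1, b0=0.0, b1=1000.0):
    """ADC (mm^2/s) from signals measured at two b-values."""
    return math.log(s_b0 / s_b1) / (b1 - b0)

s0 = 100.0
adc_true = 0.8e-3                          # illustrative order for brain tissue
s_b1000 = dw_signal(s0, 1000.0, adc_true)
adc_est = apparent_diffusion_coefficient(s0, s_b1000)

# Isotropic image (item 6): simple average of the signals measured with the
# gradient applied along three orthogonal directions (assumed per-axis ADCs):
s_iso = (dw_signal(s0, 1000.0, 0.6e-3)
         + dw_signal(s0, 1000.0, 0.8e-3)
         + dw_signal(s0, 1000.0, 1.0e-3)) / 3.0
```

This also shows why infarcted tissue is bright on the DW image (b = 1000) yet dark on the ADC map: restricted diffusion means a smaller ADC, hence less exponential attenuation of the signal.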
        The main clinical applications and observations are the following:
        (1) DW images are exquisitely sensitive to the direction of the axons in white matter tracts in the brain, a property known as "anisotropy";
        (2) Early detection and characterization of cerebral ischemia can be applied to stroke management;
        (3) Detection of ischemic areas of the brain, where the infarcted tissue appears as a "bright light bulb". In regions of restricted diffusion, there is less attenuation of the MR signal, leading to relatively brighter areas of the DW image. The "bright light bulb" region represents the cytotoxic edema in the infarcted tissue. The exact mechanisms leading to the restricted diffusion that occurs within minutes after interruption of blood flow have not been completely elucidated. The best current theory involves the rapid breakdown of energy metabolism and ion exchange pumps, leading to large shifts of water from the extra-cellular space into the intra-cellular space. Because the intra-cellular space is more restricted than the extra-cellular space, diffusion is correspondingly more restricted;
        (4) Differentiation of cysts from solid tumors; 
        (5) Measurement of deep body temperatures (inflammation, fever, edema); 
        (6) Aid in the diagnosis of inflammatory conditions such as encephalitis;
        (7) Evaluation of various white matter abnormalities (e.g., multiple sclerosis);
        (8) Effect of b-value: As b increases, the image becomes noisier and the white matter tracts are relatively brighter;
        (9) Infarcted tissue is bright on the DW image (b=1,000) and dark on the ADC map (b=0); and
        (10)  Changes in anisotropy may indicate disease activity or diseases of the white matter (e.g., multiple sclerosis, leukodystrophies).            

        Project 2: "Brain Microcirculation Studied with Magnetic Resonance Perfusion Imaging" [Completed]            
        Project Director: Dr. Alain L. Fymat               

        Perfusion is the flow of blood through the capillary circulation of an organ or tissue region, quantified in terms of the flow rate normalized to the tissue mass over a time frame at least ~20 times the diffusion time. It can be measured in vivo by monitoring a tracer that is either deposited within, or flows through, an organ or tissue of interest. Perfusion processes in the cerebral microcirculation can be imaged with good spatial and sub-second temporal resolution by observing either the time-resolved passage of a bolus injection of an MR contrast agent (blood is the diffusible tracer) or the transit of inverted (or "RF-tagged") water protons (endogenous blood water is the diffusible tracer). Perfusion imaging is an alternative to the more traditional and well-established nuclear medicine perfusion techniques.
        There are two broad classes of perfusion imaging methods, both of which can provide perfusion information:
        (1) Dynamic Susceptibility Contrast (DSC): Tissue signal changes are monitored using an exogenous (injectable) MR relaxation contrast agent as a tracer (Gadolinium, Gd). Exogenous contrast agents include paramagnetic or super-paramagnetic agents that shorten the transverse relaxation times (T2*), paramagnetic agents that shorten the longitudinal relaxation times (T1), and agents that shorten the true T2 relaxation times. Here, blood is the diffusible tracer, and the theory is based on the classical indicator-dilution theory.
        (2) Arterial Spin Labeling (ASL): This is a family of techniques. Tissue signal changes are monitored using an endogenous contrast agent, which is an inherent MR tissue contrast mechanism. Endogenous blood water is the diffusible tracer. By applying an appropriate series of RF-pulses, water protons in arterial blood can be magnetically "labeled" prior to their entry into the capillary blood. When these labeled water protons exchange with tissue water at the capillary level, they alter the magnetic properties of the tissue in a way that can be measured and translated into quantitative flow data. Cerebral perfusion is measured through the effect of labeled blood water on the magnetic properties of brain water at the level of capillary tissue exchange. This technique allows non-invasive quantitative measurement of flow.  
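        The indicator-dilution quantities used in DSC can be sketched under standard tracer-kinetic assumptions: relative cerebral blood volume (CBV) is proportional to the area under the tissue concentration-time curve of the bolus, the mean transit time (MTT) is the first moment of that curve, and the central volume theorem relates the three maps through MTT = CBV/CBF. The concentration curve below is a synthetic bolus, not patient data.

```python
import math

# Indicator-dilution sketch for DSC perfusion: relative CBV is the area under
# the concentration-time curve, MTT is its normalized first moment, and CBF
# follows from the central volume theorem. The bolus curve is synthetic.

def trapezoid(values, dt):
    """Area under a uniformly sampled curve (trapezoidal rule)."""
    return sum((values[i] + values[i + 1]) * dt / 2.0
               for i in range(len(values) - 1))

dt = 0.5                                            # seconds between samples
t = [i * dt for i in range(60)]
conc = [ti ** 2 * math.exp(-ti / 2.0) for ti in t]  # synthetic bolus curve

cbv_rel = trapezoid(conc, dt)                       # relative CBV: area under curve
first_moment = trapezoid([ti * ci for ti, ci in zip(t, conc)], dt)
mtt = first_moment / cbv_rel                        # mean transit time (seconds)
cbf_rel = cbv_rel / mtt                             # central volume theorem
```

This is the relationship behind the (CBV/CBF) map mentioned below: it is equivalent to an MTT map, and an infarcted region shows decreased CBF and increased MTT for a given CBV.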
        The main clinical applications and methodological considerations are the following:
        (1) Ischemic but viable brain tissue, due to auto-regulatory vasodilatation, may demonstrate increased cerebral blood volume (CBV) relative to cerebral blood flow (CBF), which will be visualized on a (CBV/CBF) map (equivalent to a map of the mean transit time, MTT). Brain maps will show decreased CBF and increased MTT for a given CBV in the infarcted brain;
        (2) In perfusion imaging of tumors, where the blood brain barrier (BBB) is often disrupted, the result is increased relaxation enhancement and decreased susceptibility effects. If these effects are not accounted for, perfusion may be underestimated;
        (3) Both DSC and ASL provide either relative or absolute perfusion information, the latter being technically very demanding regardless of the approach. In addition:
                (a) DSC has the advantage of providing more information (CBF, CBV, MTT, various summary parameters), whereas ASL generally provides only CBF;
                (b) ASL is totally non-invasive;
                (c) Both techniques have problems with extreme flow conditions, which tend to violate methodological assumptions. In particular, ASL assumes that labeled blood water fully exchanges with tissue water at the capillary level, which may not occur under high flow-rate conditions;
                (d) Both techniques have multi-slice capability, with a limitation on the number of slices that can be imaged; DSC is more limited than ASL.
        GENERAL NOTE: Diffusion and Perfusion Imaging techniques can, and often are, applied in combination.            

        Project 3: "Brain Microcirculation Studied with Functional Magnetic Resonance Imaging" [Completed]
        Project Director: Dr. Alain L. Fymat            

        Functional MR Imaging (fMRI) is a probe into brain functioning. It takes advantage of the changing signal within blood and surrounding tissue as the ratio of oxyhemoglobin to deoxyhemoglobin (oxy-Hb/deoxy-Hb) changes. Whenever the oxygen delivered to the brain exceeds metabolic demands, the (oxy-Hb/deoxy-Hb) ratio increases, the MR signal loss due to deoxy-Hb-induced dephasing of spins is reduced, and the result is a relative increase of signal in active areas of the brain.
        The T2* relaxation rate of blood changes depending on whether or not Hb is bound with oxygen (oxy-Hb), in which case blood is slightly diamagnetic. However, when the oxygen is removed from the Hb molecule (deoxy-Hb), the Hb molecule becomes more paramagnetic due to more unpaired electrons. This increased paramagnetism leads to intravoxel dephasing of the MR signal and, thus, signal loss on T2*-weighted images due to deoxy-Hb in the blood. In other words: deoxy-Hb acts as an endogenous susceptibility contrast agent. As blood flows from the arterial to the venous side through the capillaries, there is some extraction of oxygen, yielding a decrease in (oxy-Hb/deoxy-Hb) ratio and, therefore, decreased signal (usually less than approximately 6% with 1.5 Tesla systems).             
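        The mechanism above can be sketched with the standard T2*-weighted signal equation, S = S0 exp(-TE/T2*). The T2* and TE values below are illustrative assumptions, not calibrated tissue parameters: when activation raises the (oxy-Hb/deoxy-Hb) ratio, dephasing is reduced, T2* lengthens slightly, and the signal rises by a few percent.

```python
import math

# Sketch of BOLD contrast on a T2*-weighted image: S = S0 * exp(-TE / T2*).
# During activation, less deoxy-Hb means less intravoxel dephasing, a slightly
# longer T2*, and a signal increase of a few percent. Values are illustrative.

def t2star_signal(s0, te_ms, t2star_ms):
    """Signal on a T2*-weighted image for echo time TE and relaxation time T2*."""
    return s0 * math.exp(-te_ms / t2star_ms)

te = 40.0                                  # echo time (ms), assumed
rest = t2star_signal(100.0, te, 50.0)      # baseline T2* (ms), assumed
active = t2star_signal(100.0, te, 52.0)    # slightly longer T2* on activation

percent_change = 100.0 * (active - rest) / rest   # a few percent
```

The resulting change is on the order of a few percent, consistent with the less-than-approximately-6% signal changes quoted above for 1.5 Tesla systems.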
        This project aims to study brain microcirculation with fMRI to provide complete brain maps with associated histograms of vessel size distributions and dynamics.   

        NUCLEAR MEDICINE:             

        Project # 1: "Organ Characterization with Nuclear-Medicine Tomography"* [Completed]             
        Project Directors: Dr. Alain L. Fymat, Dr. Moses A. Greenfield and Dr. W. N. Paul Lee            

        Nuclear-medicine instruments in current use employ imaging and scanning techniques. Tomographic imaging scanners are usually classified into two general categories: "planar" tomographs, where the image corresponds to the distribution of radioactivity in longitudinal planes parallel to the head-to-toe axis of the patient, and "transverse" tomographs, yielding an analogous image in cross-sectional planes perpendicular to all planes containing the head-to-toe axis. Thus, for example, well-known devices such as the positron camera of Brownell & Burnham and the scanner of Anger are planar tomographs. The scanner of Kuhl & Edwards and the newer ECAT (emission computed axial tomography) and PET (positron-emission tomography) axial scanners are transverse tomographs. The instrument of Brownell & Burnham, the ECAT, and the PET utilize the property of positron-emitting radionuclides, which yield gamma radiation from annihilation of the positron and an electron essentially at the site of the radionuclide. The radiation consists of highly energetic gamma photons (511 keV) ejected 180 degrees apart, thus requiring opposed detectors for their detection.            
        With the increased knowledge of the biologic effects of ionizing radiations, and the parallel increased applications of tracer techniques, there has been growing concern regarding post-treatment effects of exposure to ionizing radiations and the possible deleterious after-effects of repeated exposure to low doses of radiation. However, there currently exists no documented evidence of an established association between such effects and the use of radiopharmaceuticals in, for example, the diagnosis of thyroid carcinoma. Nonetheless, the radiation exposure to the organ is cumulative, and the physicochemical and medical communities are endeavoring to reduce the radiation dose while at the same time preserving the diagnostic value of nuclear-medicine techniques. Developments in preparing short-lived radioisotopes and biological material labeled with these isotopes offer new alternatives to traditional approaches. Moreover, when coupled with new concepts in nuclear-medicine instrumentation technology, further advantages could be gained in the areas of radiation-dose reduction and enhanced diagnostic accuracy.              
        In the case of the thyroid, for example, the continuing use of the iodine radioisotope I-131 represents one of the larger increments of radiation dose delivered by  nuclear-medicine applications. Its diagnostic value is also impaired by the inaccuracy of conventional measurement techniques because of background radiation from surrounding tissues and blood flow, and the required correction for attenuation due to the presence of intervening tissues. Thus, in current practice, critical corrections for extra-thyroidal neck activity, and for radiation attenuation by the intervening tissues between the thyroid and the neck surface, are required prior to determining the percentage of radioiodine uptake by the thyroid.            
        This project provides an analytical method that enables the organ depth in the body, the absolute radioactivity of this organ, and the radioactivity of the surrounding tissues to be measured concurrently over the duration of the test. Further, it gives an analytical method that enables the physiologic functions of uptake, retention, and excretion of radioactive-labeled substrates by the organ, and their relative equilibrium regime, to be assessed concurrently. Still further, it yields an analytical method for obtaining information about the fractional cardiac output to the organ. Lastly, it promotes the development of methods for reducing the dosage of radiopharmaceuticals required in nuclear-medicine procedures.             
        The method performs in vivo analyses of the time-varying absolute radioactivity of selected human organs, following intravenous injection or oral administration of a radiopharmaceutical. It may be employed to quantitate the absolute activity of a physiologically important radioactive isotope within an organ over a predetermined test period. The resulting time-activity curve can in turn be used to analyze physiological factors - that is, the functions and activity of an organ, including the physical and chemical processes involved - so as to give the regional blood flow to the organ, and the metabolic function of the organ as represented by its organification of the radioactively labeled drug. The method can also be used to determine the morphological factors (that is, the form and structure) of an organ, such as its depth within the patient. It is directly applicable, for example, to the diagnosis of diseases of the thyroid, the liver, and the kidneys, and to the localization of tumors using labeled antibodies. The method is based on the recording, analysis, and interpretation of photon emissions (gamma-, x-, and coincidence photons) observed during specific tomographic scans of the organ of interest. Its practical implementation is disclosed in U.S. Patent 4,682,604, dated 28 July 1987, granted to the author and his collaborators.  

* This project was initiated by ALF and conducted with the collaboration of Dr. Moses A. Greenfield at the University of California at Los Angeles, Center for the Health Sciences, School of Medicine, Department of Radiological Sciences, and with the collaboration of Dr. W. N. Paul Lee at the UCLA-Harbor Medical Center, Research and Education Institute, Inc., Clinical Science Center and Division of Pediatric Endocrinology, Torrance, California.            

        Project # 2: "Absolute Radioassay of Extended Sources: An Equivalent Point-Source Coincidence Technique with Application to the Thyroid"* [Completed]            
        Project Directors: Dr. Alain L. Fymat, Dr. Moses A. Greenfield and Dr. W. N. Paul Lee            

        During the last two decades, the classical coincidence-counting method of nuclear physics has been successfully applied to many clinical investigations. For example, it has been particularly valuable in studies of thyroid disease and metabolism using radioiodine. In vivo measurements of radiocobalt have also been of interest in investigations of vitamin B12 absorption, retention, and accessibility in the body or in selected organs such as the liver, and in health physics. The important advantage of the method over previously used radioassay techniques is its ability to provide the absolute activity directly, without reference to a phantom. Moreover, under the common (albeit unrealistic) assumption that the tracer-containing tissue or organ can be represented by a point source of radioactivity, it reduces the data analysis to a straightforward procedure that can readily be implemented for routine use.              
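        In the idealized point-source case with two uncorrelated coincident photons and two detectors, the absolute-activity result reduces to a one-line calculation. The sketch below (rates and efficiencies are illustrative, not data from the project) shows how the unknown detector efficiencies cancel:

```python
def coincidence_activity(n1, n2, nc):
    """Absolute activity for the idealized point-source, two-detector case
    with uncorrelated coincident photons: n1 = A*e1, n2 = A*e2, nc = A*e1*e2,
    so A = n1*n2/nc and the (unknown) detector efficiencies e1, e2 cancel."""
    return n1 * n2 / nc

# simulated check (illustrative numbers): A = 1e5 Bq, efficiencies 0.2 and 0.1
A_true, e1, e2 = 1e5, 0.2, 0.1
n1, n2, nc = A_true * e1, A_true * e2, A_true * e1 * e2
A_est = coincidence_activity(n1, n2, nc)  # recovers A_true without a phantom
```

        This is why no phantom calibration is needed: the efficiencies drop out of the ratio.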
        A point source, however, is a mathematical idealization rarely encountered (if at all) in clinical practice. Some concern exists, therefore, as to the accuracy of the activity so derived. The measurement of activity in an extended source differs from that of a point source in several respects. First, an extended source is essentially a geometric distribution of point sources for which the detector efficiency may not be a constant (the so-called geometric effect). Second, the radiations emitted by certain of these point sources may be reabsorbed and/or scattered by neighboring points (self-attenuation effect), thus altering the detector efficiency for any given point. For small source-to-detector distances (SDDs), we expect the geometric effect to predominate, with a tendency to overestimate the activity, while at large SDDs this dominance would revert to the self-attenuation effect, with a concomitant underestimation of the activity. At intermediate distances, the two effects would play a somewhat similar role. In principle, therefore, a theory of the absolute activity for extended sources should incorporate both effects. The question confronting us, therefore, is whether the recorded activity depends on the detailed processes describing the two effects or simply on their integral result. Third, an extended source would exhibit an inhomogeneous activity distribution; e.g., hot or cold spots of activity (nodules) can be observed in thyroid scans. While the detectors employed integrate the activity over the thyroidal volume and, hence, may not be sensitive to the details of the activity distribution, the integrated activity nonetheless may not correspond to any single point on the thyroid.               
        This work develops the hypothesis that the integral effect is the relevant one, and that it is sufficient to replace the extended source by an "equivalent" point source, provided the two sources yield the same measured activity. The notion of equivalence is attractive in that it allows for the finite size and inhomogeneity of the source while retaining the conceptual and analytical simplicity of the point-source treatment. It also offers the advantage of representing the experimental situation more realistically. A unifying framework is provided for the coincidence-counting method in the case of an exact point source; the equivalent point-source method is developed, and the effects of additional summing, photon correlation, and detector geometry are also analyzed. While these developments are of a general nature, for illustrative purposes reference is often made to iodine uptake by the thyroid.              
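        A minimal numerical sketch of the equivalent-point-source idea follows. All numbers are illustrative, and the response model is the simplest possible (inverse-square geometric efficiency times exponential self-attenuation), not the project's full formalism:

```python
import math

MU = 0.1  # assumed linear attenuation coefficient (cm^-1); illustrative only

def point_response(d):
    """Response to a unit point source at depth d (cm): inverse-square
    geometric efficiency attenuated exponentially by overlying tissue."""
    return math.exp(-MU * d) / d ** 2

def extended_response(center, half_width, n=1001):
    """Mean response of a uniform line source spanning center +/- half_width."""
    xs = [center - half_width + 2 * half_width * i / (n - 1) for i in range(n)]
    return sum(point_response(x) for x in xs) / n

def equivalent_depth(center, half_width):
    """Depth of the fictitious 'equivalent' point source whose response
    matches the extended source's (bisection on the decreasing response)."""
    target = extended_response(center, half_width)
    lo, hi = center - half_width, center + half_width
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if point_response(mid) > target:
            lo = mid  # response still too high: equivalent point lies deeper
        else:
            hi = mid
    return 0.5 * (lo + hi)

d_eq = equivalent_depth(center=5.0, half_width=2.0)
# d_eq falls inside the source, slightly shallower than its center, because
# this response model is convex and decreasing in depth
```

        The point is that a single fictitious depth reproduces the measured response of the whole distributed source, exactly the simplification the equivalence hypothesis buys.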
        In summary, a general methodology is provided for the absolute assay of radioisotopes decaying with coincident photons in an extended source. In the determination of the source activity, the method requires neither the detailed consideration of the geometric and self-attenuation processes taking place between the source component points nor a knowledge of the distribution of activity across the source. It derives from the concept of an "equivalent point source", that is, a fictitious point source whose activity would equal that measured for the actual extended source. It has been developed for an arbitrary number of coincident photon types displaying any arbitrary degree of mutual correlation, and for arbitrary detection geometry. A unifying formalism has been developed for both point and extended sources and for single and dual detecting systems. It is found that, in all cases, the various instrumental and spectroscopic uncertainties appear within a composite parameter that can be determined by standard calibration procedures; this parameter is, in turn, only weakly dependent on its own component parameters. New expressions and relationships are obtained that provide greater physical insight into coincidence-counting methods.

* This project was initiated by ALF and conducted with the collaboration of Dr. Moses A. Greenfield at the University of California at Los Angeles, Center for the Health Sciences, School of Medicine, Department of Radiological Sciences, and with the collaboration of Dr. W. N. Paul Lee at the UCLA-Harbor Medical Center, Research and Education Institute, Inc., Clinical Science Center and Division of Pediatric Endocrinology, Torrance, California.             

        Project # 3: "Optical Principles of Coincidence Counting Imaging in Nuclear Medicine" [Completed]
        Project Directors: Dr. Jack I. Eisenman and Dr. Alain L. Fymat             

        Coincidence counting in nuclear physics is a well-established method, which has been successfully applied to many nuclear medicine clinical investigations. Such investigations include: (1) Studies of thyroid disease and metabolism utilizing radioiodine; (2) Investigations of vitamin B-12 absorption, retention, and accessibility in the body, or in selected organs such as the liver, employing in vivo measurements of radiocobalt; and (3) Health physics assessments.              
        The basic equation of the coincidence counting method in the case of a single detecting crystal (its generalization to two or more detectors has been provided by Fymat, Greenfield and Lee, 1985) is phenomenologically analogous to the interference equation of wave optics. This similarity should perhaps not be too surprising, considering the long history of the application of optical principles to the diffraction of X-rays (see, for example, James, 1948; and Fymat and Eisenman, 1990).  
        The formal similarity between optical interference and nuclear coincidence counting is demonstrated. The optical interference equation is derived from the vantage point of "partial coherence theory", and the coincidence counting equation is generalized to partially coherent emissions. Similarity is demonstrated, but not identity. An antithetic relationship between the two disciplines (optics, nuclear physics/medicine) is established, which is in keeping with the very nature of the two fields. In imaging language, interference could be viewed as a "positive" contrast whereas coincidence counting could be viewed as a "negative" contrast.             
        Analogies between apparently unrelated scientific disciplines have historically proven to be fruitful in the advancement of the state of such fields. It is with this historical lesson in mind that this work attempts to correlate nuclear medicine and optics methodologies. It will derive therefrom an enhanced knowledge, and perhaps added clinical information, from the methods employed.

* This project was initiated by ALF and conducted in part with the collaboration of Dr. Jack I. Eisenman at the University of California at Los Angeles, Center for the Health Sciences, School of Medicine, Department of Radiological Sciences, and at the Charles R. Drew University of Medicine and Science, Postgraduate Medical School, Department of Radiology, Los Angeles, California.              

        Project # 4: "A New Phenomenon in Medical Imaging: Effects of Radiation Coherence on Spatial Resolution"* 
        Project Director: Dr. Alain L. Fymat             

        Spatial resolution in medical imaging has been mainly concerned with the ideal case of monochromatic, incoherent radiation emanating from a point source. None of these assumptions is realized in practice. Thus, a physical source is neither strictly monochromatic (even the sharpest spectral line has an inherent finite width; analogously, even the most discrete nuclear emission has a finite energy breadth), nor a point source (even the smallest radiator consists of very many elementary radiators or atoms), nor incoherent (the radiation vibrations at two neighboring points are partially correlated).               
        Four concepts are here important:
        (1) In the Fourier representation of the radiation emission field, the ideal point source is visualized as an infinitely long wave train, which, at any distant point, produces a regular vibration of constant amplitude whose phase varies linearly with time. The real source is viewed as the sum of several such wave trains, and produces an irregular amplitude- and phase-varying composite quantity;
        (2) The partial coherence state of the radiation field is characterized by a coherence time, a coherence length, and a coherence region. The amplitude of the composite variation will remain essentially constant only over a time (or length) interval which is small compared to the coherence time (or length);
        (3) For two separated points illuminated by a quasi-monochromatic radiation, the coherence will depend on the distance of separation between the points. For a small distance, the fluctuations at the two points will be essentially the same, and for larger distances, some correlations between the fluctuations will exist provided the distance does not exceed the coherence length; and 
        (4) To describe adequately a wave field produced by a finite polychromatic source, it is desirable to introduce some measure for the correlation that exists between the vibrations at different points in the field. When the correlation is perfect, the vibrations are said to be completely coherent. Conversely, the vibrations are completely incoherent when they are uncorrelated. In general, neither of these situations is realized and we may speak of partially coherent vibrations.            
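        In partial coherence theory these notions are made quantitative through the mutual coherence function; the standard definitions (from classical optics, not restated in the text above) are:

```latex
\Gamma_{12}(\tau) = \langle E_1(t+\tau)\, E_2^{*}(t) \rangle, \qquad
\gamma_{12}(\tau) = \frac{\Gamma_{12}(\tau)}{\sqrt{\Gamma_{11}(0)\,\Gamma_{22}(0)}}, \qquad
0 \le |\gamma_{12}| \le 1,
```

        with $|\gamma_{12}| = 1$ corresponding to complete coherence, $|\gamma_{12}| = 0$ to complete incoherence, and intermediate values to the partially coherent vibrations described in (4). The coherence time and length of (2) follow from the spectral width $\Delta\nu$ as $\Delta\tau_c \approx 1/\Delta\nu$ and $\Delta l_c = c\,\Delta\tau_c$.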
        The above situation requires us to look anew at the ways in which spatial resolution is defined, measured, and assessed in the medical literature. This work brings attention to the importance of partial coherence in the assessment of spatial resolution of medical images. Such a consideration may also have an important bearing on the design of medical imagers. It is shown that the state of phase coherence of nuclear radiation emission is an important new consideration in the definition, measurement, and determination of the spatial resolution of medical images. It is also relevant in the design, analysis, and performance prediction and evaluation of medical imagers. "Phantom" studies are used to demonstrate that the state of coherence of the radiation may vary from complete coherence to partial coherence to total incoherence as deeper tissue/organ structures are imaged. Substantiation and corroboration of these theoretical and experimental conclusions are found in classical optical diffraction and imaging, and in nuclear diffracting processes. These conclusions have been used in the project "Small Vessels Microcirculation Imaging" described above where coherence is considered through the equivalent process of polarization.

* This project was initiated by ALF and conducted in part with the collaboration of Dr. Jack I. Eisenman at the University of California at Los Angeles, Center for the Health Sciences, School of Medicine, Department of Radiological Sciences, and at the Charles R. Drew University of Medicine and Science, Postgraduate Medical School, Department of Radiology, Los Angeles, California.                 


        Project # 1: "Sphere of Smiles"  [In Progress]            
        Project Director: Liliane M. Bazerghi            

        Project # 2: "Multicultural Ecopsychology"  [In Progress]             
        Project Director: Liliane M. Bazerghi                

        This project is concerned with a new understanding of life at the "ecosystem" level. Following the Norwegian philosopher Arne Naess (~ 1970s), it distinguishes between  so-called "shallow ecology", which is anthropocentric (or human-centered), and "deep ecology", which is naturocentric in that it recognizes the intrinsic value of all living beings and views humans within this larger natural environment. Whereas ecobiology deals with the relations between living organisms and their environment, and ecosociology  deals with the relationship between the distribution of human groups with reference to material resources and the consequent social and cultural patterns, ecopsychology deals with the ecological self that is the connection between an ecological perception of the world and the corresponding behavior. 
        This work looks at ecopsychology from a multicultural viewpoint.            

        Project # 3: "A Psycho-Cybernetic Model of Positive Thinking" [In Progress]            
        Project Directors: Liliane M. Bazerghi and Dr. Alain L. Fymat             

        The first cyberneticists (Norbert Wiener, John von Neumann, Claude Shannon, Warren McCulloch, and others) set themselves the challenge of discovering the neural mechanisms underlying mental phenomena and expressing them in explicit mathematical language. Their original intention was to create an exact science of the mind. Although their approach was quite mechanistic, concentrating on patterns common to animals and machines, it involved  many novel ideas that exerted a tremendous influence on subsequent systemic conceptions of mental phenomena. All the major achievements of cybernetics originated in comparisons between organisms and machines, in other words, in mechanistic models of living systems.                 
        The contemporary science of cognition, which offers a unified scientific conception of brain and mind, can be traced back directly to the pioneering years of cybernetics. In particular, Bateson pioneered the application of systems thinking to family therapy, developed a cybernetic model of alcoholism, and authored the double-bind theory of schizophrenia, which had a major impact on the work of R. D. Laing and many other psychiatrists. However, Bateson's most important contribution to science and philosophy may have been the concept of mind, based on cybernetic principles, which he developed during the 1960s. This revolutionary work opened the door to understanding the nature of mind as a systems phenomenon and became the first successful attempt in science to overcome the Cartesian division between mind and body.               
        However, the cybernetic machines are very different from Descartes' clockworks. The crucial difference is embodied in Wiener's concept of feedback. A "feedback loop" is a circular arrangement of causally connected elements, in which an initial cause propagates around the links of the loop, so that each element of the cycle has an effect on the next, until the last "feeds back" the effect into the first element of the cycle. The consequence of this arrangement is that the first link ("input") is affected by the last ("output"), which results in self-regulation of the entire system, as the initial effect is modified each time it travels around the cycle. Feedback is thus "the control of a machine on the basis of its actual performance rather than its expected performance". In a broader sense, feedback has come to mean the conveying of information about the outcome of any process or activity to its source.              
        The "cyberneticists" distinguish between two kinds of feedback: "self-balancing" (or "negative") and "self-reinforcing" (or "positive") feedback. The latter had been known for hundreds of years in common parlance as the "vicious circle". This expressive metaphor describes a bad situation leading to its own worsening through a circular sequence of events. There are other common metaphors describing self-reinforcing feedback phenomena: the "self-fulfilling prophecy", in which originally unfounded fears lead to actions that make the fears come true, and the "bandwagon effect", the tendency of a cause to gain support simply because of the growing number of its adherents.             
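        The two kinds of feedback can be illustrated with a toy iteration (a deliberately minimal model, not part of the project's formalism), in which a fixed fraction of the output is fed back into the input each cycle:

```python
def run_feedback(x0, gain, steps):
    """Iterate a single feedback loop: each cycle, a fixed fraction ('gain')
    of the current output is fed back and added to the input.
    gain < 0 gives self-balancing (negative) feedback;
    gain > 0 gives self-reinforcing (positive) feedback."""
    x, history = x0, [x0]
    for _ in range(steps):
        x = x + gain * x  # the output of one cycle becomes the input of the next
        history.append(x)
    return history

balancing = run_feedback(1.0, -0.5, 20)    # deviation dies out: self-regulation
reinforcing = run_feedback(1.0, +0.5, 20)  # deviation snowballs: "vicious circle"
```

        The same loop structure produces opposite behaviors depending only on the sign of the feedback, which is the distinction the paragraph above draws between the thermostat-like vicious-circle-breaking case and the bandwagon case.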
        In this project, we aim to model positive thinking as a self-reinforcing feedback loop. 

        RADIATION ONCOLOGY:               

        Project # 1: "Invariant Imbedding and Radiation Dosimetry"* [Completed]             
        Project Director: Dr. Richard Bellman            
        Project Co-Directors: Dr. Alain L. Fymat, Dr. S. Ueno, and Dr. R. Vasudevan              

        In many physical and medical phenomena, we meet with the problem of studying the characteristics of a system by its responses to probes. This is the type of inverse problem we want to study if we desire to inquire into the nature of the sources inside a medium from the details of the intensities of radiation emerging from it. This aspect is particularly relevant in radiotherapy and diagnosis. For example, in the case of a brain tumor or any malignant growth in the skull, a radioactive isotope like Tc-130 is injected into a blood vessel to be selectively absorbed by the abnormal growth. If, from a study of the emergent radiation, we can learn some information about the growth, such as its location and size, we will have fashioned an effective diagnostic tool.            
        As a first step towards this end, this project considers the case of a homogeneous, isotropically scattering target slab containing an internal plane emitting source and studies the inverse problem of determining the distribution of the internal emitting source by measuring the angular distribution of the intensity of radiation emergent from the slab.                 
        A distribution of the internal source is assumed and the intensity of radiation emerging from the surface is computed by the method of order-of-scattering. By the use of the quasi-linearization technique, the parameters pertaining to the internal plane source are calculated.

* This work was conducted by RB at the University of Southern California, Departments of Electrical Engineering, Mathematics, and Medicine, Los Angeles, California, with the collaboration of ALF at California Institute of Technology, Jet Propulsion Laboratory, Pasadena, California and University of Southern California, Department of Electrical Engineering, Los Angeles, California; with the further collaboration of SU at Kyoto University, Kyoto, Japan and University of Southern California, Department of Electrical Engineering, Los Angeles, California; and with the collaboration of RV at Institute of Mathematical Sciences, Madras, India and University of Southern California, Department of Electrical Engineering, Los Angeles, California.              

        Project # 2: "Optimization of the Radiotherapy Treatment Plan"* [Completed]             
        Project Director: Dr. Alain L. Fymat            

        The problem posed to the radiation oncologist is to cure the patient or, at least, relieve him of the severe symptoms produced by the disease. At the same time, the treatment should not cause unacceptable damage to other organs or to adjacent healthy tissues. Thus, in planning the treatment, the objective is to reach the best trade-off between often conflicting requirements. This project aims to develop a methodology capable of providing such a trade-off. Initially, considerations are limited to the simple but practically important case of a single radiotherapy procedure: external irradiation of an internal tumor using a single photon (or electron) energy. Ultimately, of course, one may need to consider hybrid procedures involving a variety of photon and electron energies, a combination of external and intracavitary radiations, and even the combined use of several forms of therapy (e.g., radiotherapy, chemotherapy, brachytherapy, hyperthermia).             
        There are certain basic implicit assumptions in the practice of radiotherapy, namely:
        (1) Tumor control and complications in normal tissues and organs are radiation dose-dependent and dose-related phenomena;
        (2) The sites of radiation-related complications are known to a certain extent. Moreover, there are clinical guidelines regarding the relationship between the clinical state of each site, the stage and extent of the tumor, and the likelihood and extent of complications;
        (3) The dose tolerance for any form of radiation (external, intracavitary, or both) and any radiation regimen is approximately known. Thus, the dose-response curves for normal and diseased organs and tissues are also known to the same extent;
        (4) The dose and the dose configuration require a multiplicity of applied sources or beams to conform to the stage and extent of the disease;
        (5) The target volume occupied by the tumor-bearing tissues can be accurately specified.            
        The conceptual approach to the treatment is then to deliver an adequate dose to the tumor-bearing region relative to the surrounding tissues and structures so as to maximize the probability of tumor control while, at the same time, minimizing the probability of complications.                
        There are two main factors of importance in the context of achieving tumor control within tissue tolerance limits:    
        (1) Dose and its fractionation; and
        (2) Time, duration of each treatment, and interval between treatments.            
        From the above assumptions, approach, and factors, certain criteria have evolved for characterizing the ideal dose distribution. These include essentially: 
        (1) Uniform dose distribution throughout the tumor volume (encompassing a margin surrounding the actual tumor); and
        (2) Minimal dose to vulnerable regions. 
        Other more detailed criteria have been elaborated by Hope et al (1967):
        (1) Minimized dose gradient across the tumor;
        (2) Maximized tumor dose relative to the largest incident dose;
        (3) Minimized integral dose; 
        (4) Matched shapes of treated area and area indicated for treatment; 
        (5) Minimized dose to vulnerable regions; and
        (6) Adequate dose to lymphatic spread or regions of possible direct extension.            
        However, while the radiation treatment problem has been defined, its constraints identified, and criteria set forth for the idealized density distribution, the related problem of optimizing the treatment plan has not yet been satisfactorily resolved in spite of the fact that several methods have been propounded. This project explores the reasons for this situation and, at the same time, develops a new optimization strategy in the case of single and multiple beam variables, which seems particularly suitable for adaptation to any existing treatment plan. This method has been illustrated in a clinical case study. 
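        As a toy illustration of the trade-off such an optimization must strike (all matrices and weights below are hypothetical, and actual treatment planning uses far richer dose models and constraints than this sketch), a weighted least-squares formulation balances tumor-dose uniformity against dose to a vulnerable region:

```python
import numpy as np

# Toy dose-deposition matrix (all numbers hypothetical): rows are voxels
# (two in the tumor, one in a vulnerable organ), columns are beams; entry
# (i, j) is the dose voxel i receives per unit weight of beam j.
A = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.6, 0.1]])
tumor, organ = A[:2], A[2:]
target = 10.0  # prescribed uniform tumor dose (arbitrary units)

def plan(lam):
    """Weighted least-squares trade-off: minimize
    ||tumor @ w - target||^2 + lam * ||organ @ w||^2
    via the regularized normal equations; returns the beam weights."""
    H = tumor.T @ tumor + lam * organ.T @ organ
    b = tumor.T @ np.full(2, target)
    return np.linalg.solve(H, b)

def organ_dose(w):
    return float(organ @ w)

w_low, w_high = plan(0.1), plan(1.0)
# raising lam lowers the organ dose at some cost in tumor-dose uniformity
```

        Sweeping the penalty weight traces out the trade-off curve between the conflicting requirements described above; choosing the operating point on that curve remains a clinical judgment.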

* This project was initiated by ALF and conducted in part with the collaboration of Dr. Moses A. Greenfield and Dr. S. C. Lo at the University of California at Los Angeles, Center for the Health Sciences, School of Medicine, Department of Radiological Sciences, and with the collaboration of Dr. David Findley at the City of Hope National Cancer Center in Duarte, California.  


        Project # 1: "Grading Renal Artery Stenosis"* [Completed]              
        Project Director: Dr. Peter Dure-Smith            
        Project Co-Directors: Dr. Alain L. Fymat, Dr. R.D. Block and Dr. P.R. Chang              

        The grading of renal artery stenosis (RAS) is currently a subjective evaluation by the radiologist. The aim of this project is to assess the accuracy in grading RAS by helical computed tomography angiography (CTA) as compared with digital subtraction angiography (DSA), and to further assess the utility of CTA as a screening tool for RAS.                       
        Nine nephrology patients presenting with uncontrolled high blood pressure (HBP, suspected to be caused by RAS) were evaluated. After giving informed consent, these patients were first examined with CTA and subsequently with DSA for confirmation of the CT diagnoses.             
        In a blind study of the 9 patients, three experienced radiologists (other than the investigators) from the Loma Linda University, School of Medicine, were asked to evaluate the renal arteries (left and right) in the images obtained with CTA -- including axial scans (AS), curved reformations (CR), maximum intensity projections (MIP), and surface-shaded displays (SSD) -- and DSA. The radiologists were asked to evaluate the: Quality of the study; Degree of RAS (left and right renal arteries); Presence of post-stenotic dilatation; Collateral blood supply; Size of kidneys and quality of nephrogram; Site of any calcifications; and Likely etiology of any demonstrated RAS. The (left and right) arteries were graded for the presence and degree of RAS according to the following schedule: Grade 1: Less than 50% stenosis; Grade 2: 50% - 75% stenosis; Grade 3: 75% - 90% stenosis; and Grade 4: Complete occlusion.            
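        The grading schedule maps directly to a small lookup. Boundary conventions (e.g. exactly 50%, or stenosis between 90% and complete occlusion) are not specified in the schedule, so the choices below are assumptions:

```python
def grade_stenosis(percent):
    """Map percent stenosis to the study's RAS grading schedule.
    Boundary handling (exactly 50% -> Grade 2; 90-99% short of occlusion
    folded into Grade 3) is an assumption the schedule leaves unstated."""
    if percent >= 100:
        return 4  # Grade 4: complete occlusion
    if percent >= 75:
        return 3  # Grade 3: 75% - 90% stenosis
    if percent >= 50:
        return 2  # Grade 2: 50% - 75% stenosis
    return 1      # Grade 1: less than 50% stenosis
```

        A shared rule of this kind is exactly the sort of standardization the recommendations below call for, since "eyeballed" grades from different readers can otherwise straddle the category boundaries.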
        The investigators had previously shown a close correlation between the three types of CTA images and DSA (see: Radiology Supplement 205{P}. 572, 1997). The results of this extended investigation were the following: Generally, good correlation was obtained between CTA and DSA grading, even among career interventional radiologists who acknowledged a bias towards DSA as the "gold standard" and had less experience with the newer CTA imaging; differences in stenosis grading between the different CTA reconstruction images, and between CTA and DSA images, are essentially negated by the lack of exactitude in either linear measurements or "eyeballing" the stenosis; CTA is a very useful screening tool for RAS; CTA should be preferred over the invasive DSA; and experienced radiologists may find it equally or more accurate to "eyeball" the degree of stenosis rather than to measure it.
        Recommendations are: A standard and a methodology are needed for the grading of RAS; and a multi-center prospective study utilizing that standard, sanctioned by the appropriate professional organizations (SUR, SCVIR, RSNA, and ACR), should be undertaken to compare the results from the several available modalities (DSA, CTA, MRI, ultrasound, and radionuclide imaging).
        The above study should include: a methodology specific to each modality; patient selection and exclusion criteria; and a data analysis methodology for comparing the results from the different modalities.

* A collaborative part of this work was conducted by ALF at the Loma Linda University Medical Center, School of Medicine, Department of Radiology and at the U.S. Department of Veterans Affairs, Loma Linda Medical Center, Loma Linda, California. It was sponsored by the U.S. Department of Veterans Affairs, Veterans Health Administration, Washington, D.C.

        Project # 2: "World Wide Web: The Ultimate Picture Archive, Communication and Teleradiology System"* [Completed]
        Project Director: Dr. Alain L. Fymat            

        Computed Tomography (CT) was first introduced into clinical practice in 1972 with the celebrated demonstration by Hounsfield at the Annual Congress of the British Institute of Radiology. Since then, the dramatic improvements in radiological imaging have stemmed from two independent capabilities: the ability to digitize the raw data, and its subsequent storage and manipulation. This applies not only to images generated by X-radiation but also to magnetic resonance (MR), ultrasound (US), and nuclear medicine (NM) images, the latter category including positron emission tomography (PET), single photon emission computed tomography (SPECT), and gamma camera images. The electronic transfer of raw data or developed images, either locally or remotely, is a relatively recent development.
        Even though there is now a mature technology to successfully digitize an entire Radiology Department and review the corresponding images on workstation display monitors (except perhaps mammography, at the required spatial resolution), most Departments are still based on analogue films. Paradoxically, many of these films originate from digital data: CT, digital subtraction angiography (DSA), MR, US, and NM. This slow conversion to digital radiology is largely due to the high cost of replacing old equipment (fluoroscopy, analogue angiography, older US units) and of installing an electronic transmission network within and beyond the hospital. It has so far been difficult to justify this cost in terms of either improved efficiency or net savings.
         An unexpected player in the conversion to a digitized imaging department is now emerging with the meteoric rise of the World Wide Web (WWW). It is interesting to observe that this immature upstart is now capable of offering not only improved services but also substantial cost savings compared with a conventional picture archive, communication and teleradiology system (PACTS).
        This project aims to recommend a technology path for the development of a PACTS, a computerized information system for the management and communication of medical images locally (the PAC component of the system) and over longer distances (the T, or teleradiology, component), and to evaluate the different technologies making up such a system: image capture devices; soft-copy output devices; hard-copy output devices; storage/archive devices; converter devices; image transmission networks; network hardware; network software; and Radiology Information Systems (RIS)/Hospital Information Systems (HIS).
        The above list of components makes up a complicated PACTS. Fortunately, however, a convergence is taking place between the DICOM communication standard, WWW technology (Java-based programming tools, HTML, TCP/IP), and the Internet (including the derived concepts of Intranet and Extranet). Such a convergence makes it possible to substantially reduce the complexity and cost of the PACTS and, at the same time, enhance its capability. A powerful new system results, which enables multiple users to simultaneously and securely share radiological images and associated reports anytime, anywhere, from any computer. The spatial and contrast resolution of the received images will be dictated only by the technical characteristics of the computer display monitors.
        Management functions may include: access, acquisition, storage/archive, retrieval, display, and processing of images. Communication functions may include local, regional, or global distribution of images at transmission speeds that vary with the number of bits of data sent per unit time along the data transmission lines (the so-called "bandwidth"). The traditional approach to PACTS has been based on the premise that such a system is application-driven and must therefore be custom-designed to the users' needs. The consequence is that the institution-wide implementation of such a system (or a portion of it) at any medical center may represent a resource commitment of several million dollars, a cost that can escalate further when infrastructure upgrades are needed. However, while there is much truth to this dictum, it is possible to considerably reduce that cost, and at the same time enhance the capability of the system and of the Radiology Department, by resorting to off-the-shelf Internet hardware and software technology. A powerful system results which enables the sharing of images and information anytime, anywhere, from any computer.
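The bandwidth dependence described above is easy to make concrete. The back-of-the-envelope sketch below uses illustrative, assumed figures (a 512x512, 16-bit uncompressed CT slice and three representative link speeds), not measurements from this project:

```python
def transfer_seconds(image_bytes, bandwidth_bits_per_sec):
    """Time (in seconds) to send an image over a link of the given bandwidth."""
    return image_bytes * 8 / bandwidth_bits_per_sec

# Assumed uncompressed CT slice: 512 x 512 pixels at 16 bits (2 bytes) per pixel
ct_slice = 512 * 512 * 2  # = 524,288 bytes, about 0.5 MB

# Illustrative transmission links, from dial-up to a local area network
links = [("56 kb/s modem", 56_000),
         ("T1 line", 1_544_000),
         ("100 Mb/s LAN", 100_000_000)]

for name, bps in links:
    print(f"{name}: {transfer_seconds(ct_slice, bps):.2f} s")
```

The same arithmetic scales linearly for a full study of many slices, which is why the choice of network bandwidth dominates the practical responsiveness of a teleradiology link.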

* This project was conducted by ALF at the U.S. Department of Veterans Affairs, Loma Linda Medical Center, Loma Linda, California. It was sponsored by the U.S. Department of Veterans Affairs, Veterans Health Administration, Washington, D.C.

        GERIATRICS AND GERONTOLOGY:             

        Project # 1: "Combating Human Aging and Senescence"* [Completed]            
        Project Director: Dr. Alain L. Fymat             

         "Aging", the process of growing older from birth onward, is the collection of the early stages of the various age-related diseases. It proceeds in a downward spiral: the more we age, the more our self-repair functions decline and the less able our body is to stop aging. Thus, we age faster and faster! "Senescence", on the other hand, the process of bodily deterioration that occurs at older ages, is manifested in an increased susceptibility to many diseases and a decreasing ability to repair damage.
        In a world in which all causes of premature death had been eliminated, so that all deaths resulted from the effects of aging, we would live hearty, healthy lives until, at approximately age 85, we would nearly all die. By contrast, if the effects of senescence were eliminated, so that death rates did not increase with age but remained throughout life at the level of 18-year-olds (say, about 10 per 1000 a year for young adults in India in 1990), some people would still die at all ages, but half the population would live to age 300!
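The demographic arithmetic behind such projections can be made explicit. Under the simplifying assumption of a constant annual mortality rate m (i.e., no senescence), the fraction of a cohort surviving to age t, and the age by which half the cohort has died, are:

```latex
S(t) = (1 - m)^{t}, \qquad
t_{1/2} = \frac{\ln(1/2)}{\ln(1 - m)} \approx \frac{\ln 2}{m} \quad (m \ll 1)
```

The median lifespan implied by any particular scenario thus follows directly from the constant rate m one assumes for it.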
        Research on senescence is discovering the value of an evolutionary point of view. Gerontologists are realizing that the mechanisms that cause senescence may not be mistakes but compromises carefully wrought by natural selection. An evolutionary view suggests that more than a few genes are involved in senescence and that some of them have functions crucial to life. These genes express their various effects in a seemingly coordinated cluster of escalating effects, because any gene whose deleterious effects occur earlier than those of other genes will be selected against most strongly. Selection will act on it and other genes to delay its effects until they are in synchrony with those of the other genes that cause senescence.
        Since aging in laboratory animals has been successfully postponed, with extensions of maximum lifespan of roughly 20%, we posit that a similar or greater result is achievable in humans. The purpose of this research project is the minimization of senescence and the possible extension of maximum lifespan.
        We propose to combat aging and senescence by vigorously pursuing one or more of the following three approaches: (1) Caloric restriction (CR) and its genetic emulation; (2) Interference with metabolic processes (IMP) to lessen damage; and (3) Alleviation of the molecular damage (AMD) itself.                 
        CR is the only non-genetic intervention known to slow down aging in mammals. It lowers the generation of mitochondrial free radicals, toughens mitochondrial membranes against the free radicals' assault and, above all, reduces the age-related accumulation of mitochondrial DNA mutations. Free radical damage outside the mitochondria is not a directly important cause of aging. CR slows down aging, yet has no consistent effect on the levels of most self-produced antioxidant enzymes. It has been established that there is a fixed degree of life extension (2-3 years according to some, 20-30 years according to others) that can be achieved by manipulating the nutrient-sensing pathway, whether by CR, by drugs that trick the body into thinking it is being starved, or by genetic changes that flip the same "switch".
        IMP requires a clear understanding of the various metabolic disruptions that cause aging, and those that are effects (or secondary causes) that would simply disappear if the underlying primary causes were addressed. Despite considerable effort, progress has been extremely slow owing to the myriad of interacting processes that contribute to aging damage.                
        AMD caused by aging does not require a complete understanding of all the myriad interacting processes that contribute to aging damage. The real issue is not which metabolic processes cause aging damage in the body, but the damage itself, that is, the molecular and cellular lesions that impair the structure and function of body tissues. Aging should, in principle, be just as amenable to modulation and eventual elimination as specific diseases are. The design of therapies should therefore focus on the damage itself and on ways to alleviate the accumulation of damage. The field of candidate causes of aging can thus be narrowed to the following (Ref: Aubrey de Grey and Michael Rae, "Ending Aging"):
        (1) Mutation accumulation (disruption of the cellular biochemistry by increasing oxidative stress), including: chromosomal mutations (which cause cancer, itself predominantly a consequence of aging); mitochondrial mutations; glycation (warping of proteins by glucose); amyloid accumulation; and intra-nuclear mutations that cause cancer (non-cancer mutations accumulate too slowly to matter in a normal lifetime);
        (2) Intra-cellular aggregates (lipofuscin); 
        (3) Extra-cellular aggregates (beta-amyloid, transthyretin, and other substances of the same general sort); 
        (4) Cross-links outside cells; 
        (5) Cell defects (death resistance; loss; atrophy; senescence, which produces chemical signals that are dangerous to neighboring cells; and depletion of stem cells, which are essential to healing and the maintenance of tissue);
        (6) Nuclear epi-mutations (only for the case of cancer); and 
        (7) Nuclear mutations (not important to aging).

        Project # 1: "A New Paradigm in Medicine and Health Care" [In progress]
        Project Director: Dr. Alain L. Fymat    


        Project # 1: "Magnetic Resonance Imaging with Contrasting Nanomaterials" [In progress]
        Project Director: Dr. Alain L. Fymat   


        Project # 1: "Contributors to Sickness: Epigenetic Modulators of Several Gene Expressions" [In progress]
        Project Director: Dr. Alain L. Fymat  

        Project # 2: "Epigenetic Perspectives on Prevalent Human Diseases in Africa" [In progress]
        Project Directors: Dr. Alain L. Fymat, Dr. Joachim Kapalanga, and Dr. Shiva Singh
