Christofer Barret - "---"
Kenneth Buetow - "Enabling the molecular medicine revolution through network-centric biomedicine"
To deliver on the promise of next-generation treatment and prevention strategies in cancer, we must address its multiple dimensions. The full complement of the diverse fields of modern biomedicine is engaged in the assault on this complexity. These disciplines are armed with the latest tools of technology, generating mountains of data. Each offers an unprecedented and novel view of the fundamental nature of cancer, and each contributes a vital thread of insight. Information technology provides a promising loom on which these threads of insight can be woven.
Bioinformatics facilitates the electronic representation, redistribution, and integration of biomedical data. It makes information accessible both within and between the allied fields of cancer research. It weaves the disparate threads of research information into a rich tapestry of biomedical knowledge. Bioinformatics is increasingly inseparable from the conduct of research within each discipline. The linear nature of science is being transformed into a spiral with bioinformatics joining the loose ends and facilitating progressive cycles of hypothesis generation and knowledge creation.
To facilitate the rapid deployment of bioinformatics infrastructure into the cancer research community, the National Cancer Institute (NCI) is undertaking the cancer Biomedical Informatics Grid, or caBIG™. caBIG™ is a voluntary virtual informatics infrastructure that connects data, research tools, scientists, and organizations to leverage their combined strengths and expertise in an open environment with common standards and shared tools. Effectively forming a World Wide Web of cancer research, caBIG™ promises to speed progress in all aspects of cancer research and care, including etiologic research, prevention, early detection, and treatment, by breaking down technical and collaborative barriers.
Researchers in all disciplines have struggled with the integration of biomedical informatics tools and data; the caBIG™ program demonstrates this important capability in the well-defined and critical area of cancer research by planning for, developing, and deploying technologies that have wide applicability outside the cancer community. Built on the principles of open source, open access, open development, and federation, caBIG™ infrastructure and tools are open and readily available to all who could benefit from the information accessible through its shared environment. caBIG™ partners are developing or providing standards-based biomedical research applications, infrastructure, and data sets. The implementation of common standards and a unifying architecture ensures interoperability of tools, facilitating collaboration and data sharing and streamlining research activities across organizations and disciplines.
The caBIG™ effort has recognized that, in addition to new infrastructure, new information models are required to capture the complexity of cancer. While the biomedical community continues to harvest the benefits of genome views of biologic information, it has been clear since the founding of genetics that biology acts through complex networks of interacting genes. Information models and a new generation of analytic tools that utilize these networks are key to translating discovery into practical intervention.
Andrea Cavagna - "The problems of access policy to the empirical data of the STARFLAG project"
The EC-funded STARFLAG project collected a huge number of stereoscopic photographs of the flight of starling flocks in the wild, with the aim of gaining a better understanding of collective animal behaviour in three dimensions. To extract the 3D positions of individual birds from the raw photographs, STARFLAG developed new and groundbreaking techniques, which advanced the state of the art of the field from a few tens to several thousand animals. As soon as STARFLAG produced the 3D data, numerous requests for access to these data arrived from groups around the world. However, the lack of standardisation of both the data format and the access policy in this particular field made it difficult to deal with these requests. In this talk I will explain the policy STARFLAG followed in managing and granting access to the data, at various levels of their elaboration.
Jennifer Dunne - "Challenges and opportunities for ecological informatics"
One of the most difficult problems ecological researchers face is access to, and analysis of, diverse and dispersed information relating to organisms and their interactions with each other and the environment at multiple spatial and temporal scales. The types of questions ecologists are called on to address increasingly require integration and synthesis of diverse databases and knowledge sources. I will discuss some of the ecoinformatic technologies and tools being developed to facilitate more data-rich and conceptually integrative ecological research. Such tools will be extensible to any research that makes use of heterogeneous types and sources of data.
Plamen Ivanov - "Developing large databases and quantifying complexity in physiologic dynamics"
Traditionally, physiological systems are considered as operating according to the classical principle of homeostasis, which postulates that a system returns to equilibrium after perturbation, and that linear causality controls the pathways of physiological interaction [1,2,3]. Such classical systems are often characterized by a single dominant time scale, related to the time interval the system needs to reach equilibrium after perturbation. However, in the last decade, empirical observations have shown that even under healthy basal (resting) conditions, physiologic systems under neural regulation exhibit "noisy" fluctuations characterized by self-similar (fractal and multifractal) organization over multiple time scales [4,5,6] -- a behavior markedly different from the one postulated by the classical principle of homeostasis, and resembling certain physical systems away from equilibrium. Moreover, the scale-invariant temporal organization of physiologic fluctuations changes under different physiologic states and pathological conditions, suggesting an even higher level of complexity.
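Detrended fluctuation analysis (DFA) is one widely used method for quantifying the scale-invariant organization of such fluctuations; the following sketch is illustrative only (the abstract does not specify the authors' methods), with function names, scale choices, and the white-noise test signal chosen for this example.

```python
import numpy as np

def dfa(signal, scales, order=1):
    """Detrended fluctuation analysis: fluctuation function F(n) per window size n."""
    x = np.asarray(signal, dtype=float)
    # Step 1: integrate the mean-subtracted series to obtain the "profile"
    profile = np.cumsum(x - x.mean())
    fluctuations = []
    for n in scales:
        # Step 2: split the profile into non-overlapping windows of length n
        n_windows = len(profile) // n
        segments = profile[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        rms = []
        for seg in segments:
            # Step 3: remove a local polynomial trend within each window,
            # then measure the root-mean-square residual fluctuation
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

# For a scale-invariant signal, F(n) ~ n^alpha; uncorrelated noise gives
# alpha near 0.5, while healthy heartbeat intervals typically show alpha
# closer to 1 (1/f-like correlations).
rng = np.random.default_rng(0)
white = rng.standard_normal(10000)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(white, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

The scaling exponent alpha is read off as the slope of log F(n) versus log n, which is how a single number comes to summarize correlations across many time scales.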
We will review recent progress in understanding physiologic dynamics by adapting concepts and methods from statistical physics. In particular, we will consider examples of integrated physiologic systems under neural regulation such as cardiac dynamics, locomotion and sleep [7,8,9,10], where scale-invariant and nonlinear features emerge as a result of complex multiple-component feedback interactions. We will briefly outline the potential clinical utility of our findings.
We will discuss the problems we encountered in acquiring appropriate physiologic data, our experience in developing standardized physiologic databases, and the role of synthetic databases in testing the performance of analytic methods and in confirming the validity of empirical results.
Gene Stanley - "Statistical physics in physiology and medicine"
This talk will introduce some contributions that statistical physics might make to the important problems dealt with in the fields of physiology and medicine. We will illustrate these potential contributions by discussing a few case studies. One ongoing study involves coupled computer simulations and experiments relevant to Alzheimer's disease (AD). Specifically, we will describe recent in silico studies of the amyloid beta-protein carried out by a group of computational physicists at Boston University in close collaboration with a group at Harvard Medical School and UCLA. Our "scientific motivation" is to understand what AD is and what triggers its onset. Our "practical motivation" is that if we understand the trigger, we may be able to find a vaccine that prevents the trigger from being pulled. Our "working hypothesis" is that a certain polymer, amyloid, aggregates to form clumps called plaques by joining at a specific place on the peptide. Our goal is to use a million-dollar computer awarded to this project to find the place where two amyloid polymers join in the "first three minutes" of AD.