A research plan must answer four questions: (i) What is the scientific question that the research seeks to answer? (ii) What type of investigation will the researcher conduct? (iii) What measurements will be made? (iv) What type of data analysis will be used, and will there be sufficient statistical power in the data to give an effective answer to the question? I illustrate how each of these questions can be answered. The first requires conceptual and propositional analysis to refine concepts and postulates so that an important, answerable scientific question is developed. Questions ii, iii and iv together form a Data Statement, which must explain why a particular type of investigation is appropriate and how the question will be answered. It must define the extent to which measurements represent the concepts in question, and the accuracy and precision of those measurements. Most importantly, the Data Statement must define the type of data analysis that will be used, including a definition of the statistical power necessary to answer the question effectively. The development of a Data Statement will usually involve exploratory analysis of the scientific question, to assess the effectiveness of proposed measurements and to enable calculation of a sampling regimen that will provide adequate statistical power.
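The power calculation the abstract calls for can be done by simulation before any data are collected. The following is a minimal sketch, not the author's method: the effect size, standard deviation, and sample size are illustrative assumptions, and a normal approximation stands in for an exact t reference distribution.

```python
import numpy as np

def simulated_power(effect=0.5, sd=1.0, n=30, alpha_crit=1.96,
                    n_sims=2000, seed=42):
    """Estimate the power of a two-sample comparison by simulation.

    Repeatedly draws samples from two normal populations whose means
    differ by `effect`, computes Welch's t statistic, and returns the
    fraction of simulations in which the null is rejected. Uses a
    normal critical value (1.96 for alpha = 0.05), which is adequate
    for n >= 30; an exact t quantile could be substituted.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, sd, n)
        b = rng.normal(effect, sd, n)
        se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        t = (b.mean() - a.mean()) / se
        if abs(t) > alpha_crit:
            rejections += 1
    return rejections / n_sims

power = simulated_power()
```

Varying `n` until `simulated_power` exceeds a target (commonly 0.8) yields the sampling regimen the Data Statement should specify.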
Over the last 50 years, ecological experiments under field conditions have exploded in number, type and scope. They remain complex because of intrinsic variability in ecological measures from place to place and time to time, requiring care in their design and implementation. An experiment and its design can only be sensibly considered after thought and knowledge are used to make clear the logical basis for doing the experiment, so that its results can be interpreted in a robust and reliable manner. There are different approaches to any sequence of components of an experiment. Here, a falsificationist methodology is considered, which relates observations (things we know) to models (what we think explains the observations) to hypotheses (predictions about what will happen under novel circumstances if the model(s) is (are) correct). Experiments are then designed to create the novel circumstances in order to test the predictions. How an explicit framework influences the design of experiments is discussed, including the nature of replication and of controls for artefacts. Improving the match between natural historical and ecological knowledge and the interpretation of results of experiments will always help advance the discipline of ecology.
Behavioural ecology is the study of the ecological and evolutionary bases for variation in animal behaviour, answering proximate and ultimate questions of why animals behave the way they do. The laboratory setting enables the isolation and control of specific variables and the removal or randomisation of confounding factors, and simplifies the tracking of an individual's behaviour. Laboratory experiments, in parallel and in comparison with field studies, are valuable for answering specific questions, and certainly most ecological investigations can benefit from a combined experimental approach. Here we focus on four model areas of behavioural ecological research: mate selection, nepotism, foraging and dominance. Using both vertebrate and invertebrate examples, we consider the advantages and disadvantages of laboratory experiments and the unique information they can provide, including a comparison of three laboratory research contexts: neutral, natural and contrived. We conclude by describing how laboratory studies can help us to understand the contexts in which behavioural variation occurs in the natural environment.
Many problems in the analysis of ecological data have the format where there is an observed response that may be predicted by several covariates. Although the response can take several forms (e.g. measurements, counts, observations of presence/absence), and the covariates can also vary (e.g. be measurements themselves, or be grouped according to the treatment applied, the time or location of sampling, etc.), most of these problems can be handled in a single framework, the Generalized Linear Mixed Model (GLMM). The framework encompasses regression, ANOVA, generalized linear models, and equivalent models with random as well as fixed effects. Here, the different parts of the GLMM are described, building from regression and ANOVA to show how the extra components — the wider range of distributions, and random effects — can be added into the same framework, and how the parameters of the fitted model can be estimated and interpreted. Being able to handle data with GLMMs helps ecologists to analyse the majority of their data.
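The components the abstract lists can be made concrete by simulating data from a GLMM. This sketch assumes a hypothetical Poisson response (e.g. counts per quadrat) with one fixed covariate and a random intercept per site; all names and parameter values are illustrative, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sites, n_per_site = 10, 20
beta0, beta1 = 0.5, 0.8      # fixed effects: intercept and slope
sigma_site = 0.4             # sd of the site-level random intercepts

site = np.repeat(np.arange(n_sites), n_per_site)  # grouping factor
x = rng.normal(size=n_sites * n_per_site)         # covariate
u = rng.normal(0.0, sigma_site, n_sites)          # random intercepts

# Linear predictor on the link (log) scale: eta = beta0 + beta1*x + u_site
eta = beta0 + beta1 * x + u[site]
mu = np.exp(eta)             # inverse link gives the expected counts
y = rng.poisson(mu)          # Poisson-distributed response
```

The three extra components beyond ordinary regression are all visible here: the non-normal response distribution (`poisson`), the link function (`exp` as the inverse of log), and the random effect `u[site]` shared by observations from the same site. Fitting such a model to real data would use a dedicated package (e.g. `lme4` in R or `statsmodels` in Python).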
The practice of statistical analysis and inference in ecology is critically reviewed. The dominant doctrine of null hypothesis significance testing (NHST) continues to be applied ritualistically and mindlessly. This dogma is based on a superficial understanding of elementary notions of frequentist statistics from the 1930s, and is widely disseminated by influential textbooks targeted at biologists. It is characterized by silly null hypotheses and the mechanical dichotomous division of results into “significant” (P < 0.05) or not. Simple examples are given to demonstrate how distant the prevalent NHST malpractice is from the current mainstream practice of professional statisticians. Masses of trivial and meaningless “results” are being reported, which do not provide adequate quantitative information of scientific interest. The NHST dogma also retards progress in the understanding of ecological systems and the effects of management programmes, which may at worst contribute to damaging decisions in conservation biology. Since the beginning of this millennium, critical discussion and debate on the problems and shortcomings of NHST have intensified in ecological journals. Alternative approaches, like basic point and interval estimation of effect sizes, likelihood-based and information theoretic methods, and the Bayesian inferential paradigm, have started to receive attention. Much is still to be done in efforts to improve the statistical thinking and reasoning of ecologists and in training them to utilize the expanded statistical toolbox appropriately. Ecologists should finally abandon the false doctrines and textbooks of their previous statistical gurus. Instead, they should learn more carefully what leading statisticians write and say, and collaborate with statisticians in teaching, research, and editorial work in journals.
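The first alternative the abstract names, point and interval estimation of effect sizes, can be sketched with a percentile bootstrap. The data and group names below are hypothetical, chosen only to show an effect reported as an estimate with an interval rather than a bare P value.

```python
import numpy as np

def bootstrap_ci(a, b, n_boot=5000, seed=0):
    """Percentile bootstrap 95% CI for the difference in means, b - a.

    Resamples each group with replacement and collects the resampled
    mean differences; the 2.5th and 97.5th percentiles bound the CI.
    """
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (rng.choice(b, size=b.size).mean()
                    - rng.choice(a, size=a.size).mean())
    return np.percentile(diffs, [2.5, 97.5])

# Hypothetical data: plant heights (cm) in control vs treatment plots
rng = np.random.default_rng(7)
control = rng.normal(10.0, 2.0, 40)
treatment = rng.normal(12.0, 2.0, 40)

effect = treatment.mean() - control.mean()   # point estimate
lo, hi = bootstrap_ci(control, treatment)    # interval estimate
```

Reporting "effect = 2.1 cm, 95% CI [1.2, 3.0]" (for example) conveys both the magnitude and the precision of the result, which a dichotomous significant/non-significant verdict discards.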
An important challenge for scientists, especially those early in their careers, is preparing an effective article for submission to a peer-reviewed journal. Here, I present a number of suggestions on how this can be accomplished. This action plan addresses (1) how to approach a topic by developing a story line connected with what is already known in the field, (2) how to most efficiently organize and sequence one's efforts by starting with the descriptive parts of the manuscript and subsequently moving to the more interpretive parts, and (3) the advantages of using bibliographic software to facilitate quick and accurate referencing. I suggest that authors should aim to produce a story that does not overcomplicate the topic under investigation while at the same time presenting full and accurate coverage and interpretation of the data. Importantly, the preparation of manuscripts becomes easier with time and practice, as individuals hone their own style and approach to this task.