Author: A. Hastings, 1988.
This very brief article argues that food web theory is not an adequate approach for understanding questions about stability. He makes several salient points:
1. "Stability" is not the same thing as "persistence"--the latter implies species never go extinct, the former is a mathematical construct that may or may not be applicable to biological systems.
2. Systems may not be persistent, but may still have stable equilibria or limit cycles (e.g. a one-species system with an Allee effect.)
3. Persistence does not imply stability. Indeed, nonequilibrium solutions are important in allowing multiple species to coexist on limited resources.
4. There are a few key "structural" elements to ecological models, including age, spatial distribution, genetic and phenotypic patterns. Hastings argues that non-linear density dependence is critical, as it allows complex dynamics (food web models are often based on Lotka-Volterra dynamics, and are thus globally stable; introducing non-linearities in systems with even three species can lead to chaos.) He also argues that including age or stage structure is very important. Food web theorists tend to include such things by introducing the idea of "trophic" species, but Hastings argues that this notion is incompatible with dynamic models.
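As a quick numerical aside (mine, not the article's): the chaos claim in point 4 is easy to see in a three-species food chain with saturating, type II functional responses. The parameters below are the chaotic regime of the later Hastings-Powell (1991) chain, used here purely as an illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# chaotic parameter set of the (later) Hastings-Powell food chain
a1, b1, a2, b2, d1, d2 = 5.0, 3.0, 0.1, 2.0, 0.4, 0.01

def chain(t, u):
    x, y, z = u                      # resource, consumer, top predator
    f1 = a1 * x / (1.0 + b1 * x)     # saturating (type II) responses --
    f2 = a2 * y / (1.0 + b2 * y)     # the nonlinearity doing the work
    return [x * (1.0 - x) - f1 * y,
            f1 * y - d1 * y - f2 * z,
            f2 * z - d2 * z]

sol = solve_ivp(chain, (0.0, 5000.0), [0.75, 0.15, 9.0], max_step=0.1)
# sol.y wanders on the aperiodic "teacup" attractor rather than settling
# to an equilibrium, which is Hastings's point about nonlinearity.
```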
The article is summed up in the conclusion:
"The understanding of stability and dynamics must be based on detailed models, which include structure within species. Food web theory and general models may be appropriate for questions at the static level, but they are not detailed enough to understand stability questions." In other words, stability is not a matter for food webs, but for detailed, concrete systems.
Cocodrilo's Research Log
Wednesday, July 13, 2011
Fisheries Management--An Essay for Ecologists
Author: P. A. Larkin, 1978
This essay is a limpid overview of fisheries science as it stood in the late seventies. My suspicion is that many of the issues Larkin flags continue to be relevant today. For example, he discusses the importance of optimal control as a modeling tool, as well as conceptually rich multi-species models. He also theorizes that there may come a day when specifically targeting mid-sized fish is viable, a dream that may long since have become reality (it would be interesting to find out.) Larkin discusses the utility of numerical modeling. He says that a "computer simulation can be a marvelous crutch to the imagination", though he cautions against modeling that is so system-specific that it lacks scientific generality. He debunks the common modeling ruse of assuming catch is proportional to abundance. He concludes by arguing for "experimental" management practices, i.e. adaptive types of control, and hopes that aquaculture and farming will provide solutions in the future.
This is a good paper to return to for quotes to spike technical papers. A few examples:
"The central problem of fisheries science remains: how to manipulate the circumstances of a fishery to social and economic advantage within some constraints of ecological prudence."
"For better or for worse, most contemporary theory on the regulation of commercial fish populations is based on a huge mass of circumstantial evidence in the form of catch statistics."
"... there is wide margin for wonder about whether density-dependent or density-independent processes more commonly regulate abundance, the usual implicit assumption being that the former prevail on a state set by the latter."
"The machineries of density-dependent regulation commonly observed in fisheries nevertheless include: suppression of growth rate with an associated delay in age of maturity, and to a lesser extent a decline in fecundity; cannibalism, particularly on young of the year; predation, commonly by a wide range of species; and parasitisms and diseases (concerning which there are few quantitative epidemiological data)."
"It is most commonly assumed that environmental factors influence survival multiplicatively, and hence generate log normal-type distributions in which there is equal probability that animals will be half as abundant or twice as abundant as would be the case deterministically."
Tuesday, November 9, 2010
Predicting the effects of area closures and fishing effort restrictions on the production, biomass, and species richness of benthic invertebrate communities
Authors: J. Hiddink, T. Hutton, S. Jennings, and M. Kaiser, 2006.
This paper analyzes the effect of area closures and effort reduction on benthic populations. The data is drawn from records of fleets in the North Sea. The results are discouraging: most closures have net negative yields for benthic beasts.
The computations are obscure. Hiddink et al. claim to be using a random utility model, but the equations are nowhere to be found. The entire paper has one equation, basically explaining how effort is redistributed in response to a closure. How fish redistribute themselves, and how populations grow in response to a diversion of fishing effort, remain unclear. The authors cite themselves as the source of some model in which presumably all is explained, but I didn't chase that model down.
TODO:
1. Chase down the paper with the benthic model.
The cost of sea turtle preservation: the case of Hawaii's Pelagic Longliners
Authors: R. Curtis and R. Hicks, 2000.
This paper is another computational game rigged around sketchy "data" from logbooks, market samples, etc. The game has a name: the Random Utility Model (RUM), which I suspect is a standard tool in economics. As far as I can tell, the authors start with a log utility function, and linearize expected utility around a total of nine quantities, each of which is presumably estimable. The results are actual dollar amounts for how much each closure will impact each fisherman, on average.
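For my own reference, here is how RUM-based closure costs are usually backed out once the model is fitted: the expected per-trip welfare loss is the change in the "logsum" divided by the marginal utility of income. This is the standard formula, not necessarily Curtis and Hicks' exact procedure, and all names below are mine.

```python
import numpy as np
from scipy.special import logsumexp

def closure_cost(V, closed, beta_income):
    """Expected per-trip dollar loss from closing the masked sites.
    V: fitted site utilities; closed: boolean mask over sites;
    beta_income: income (cost) coefficient from the fitted logit."""
    return (logsumexp(V) - logsumexp(V[~closed])) / beta_income
```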
This paper has the virtue of being concise, but the choice of variable names and the total lack of exposition make it vaguely intolerable. Nothing TODO.
Monday, November 8, 2010
Fishing effort redistribution in response to area closures
Authors: J. Powers and S. Abeare, 2009.
These authors leverage what they call an "ideal free distribution" (IFD) to model fishermen's self-redistribution after area closures. In essence, the IFD concept just means that the quality of an area changes in response to an influx of fishermen. On a practical level, the upshot is a complicated numerical simulation in which there are a number of regions, each of a certain richness, and fishermen move sequentially in such a way as to optimize their profits. The "results" section is based on numbers pilfered from a genuine fisheries monitoring agency; I wrote to some chap to see if the data were publicly available, but apparently one needs clearances of some sort.
The value of this paper seems to be in its ideas, not its numerics.
TODO:
1. implement the numerics (a first sketch appears below)
2. see how sensitive the results might be
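A zeroth-order attempt at TODO 1, on my reading of the setup (all parameters invented): site quality is degraded by crowding, and boats choose the best open site one at a time.

```python
import numpy as np

def redistribute(base_quality, n_boats, closed, interference=0.5):
    """Assign boats sequentially to the most profitable open site;
    returns the number of boats per site."""
    q = np.asarray(base_quality, dtype=float)
    counts = np.zeros_like(q)
    for _ in range(n_boats):
        payoff = q / (1.0 + interference * counts)  # crowding degrades sites
        payoff[closed] = -np.inf                    # closed areas off the menu
        counts[np.argmax(payoff)] += 1
    return counts
```

At the ideal free distribution the realized payoffs should roughly equalize across occupied sites, so a quick pass at TODO 2 is to vary `interference` and the closure mask and watch how the counts shift.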
Harvesting strategies for a randomly fluctuating population
Author: D. Ludwig, 1980.
This is a short but deceptively simple paper. Ludwig compares several harvesting strategies under three different growth models. The stochastic component of each process is described by equations of the form
$ E(dN) = [f(N) - N - qeN]dt$
$ E(dN^2) = 2 \epsilon N^2 dt$
where $E$ represents expectation and $e$ represents effort. The models all have the basic form
$ dN/dt = f(N) - N - qeN$,
where $f$ takes different forms depending on the particular growth model. The models are thus not spatially explicit. Randomness is dispensed with by assuming an optimal feedback policy and deriving a differential equation for discounted expected yield as a function of initial population size.
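Before tackling the tables, a brute-force check suggests itself: simulate the diffusion directly with Euler-Maruyama and average the discounted harvest over sample paths. This is not Ludwig's method (he works with the optimal feedback policy directly), and $f$, $q$, $e$, and $\epsilon$ below are placeholder guesses.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(N, r=2.0, K=1.0):
    # placeholder growth function; Ludwig compares several forms of f
    return r * N * (1.0 - N / K)

def sample_path(N0=0.5, q=1.0, e=0.3, eps=0.05, dt=1e-3, T=50.0):
    steps = int(T / dt)
    N = np.empty(steps + 1)
    N[0] = N0
    for k in range(steps):
        drift = f(N[k]) - N[k] - q * e * N[k]   # matches E(dN) above
        diff = np.sqrt(2.0 * eps) * N[k]        # matches E(dN^2) above
        N[k + 1] = max(N[k] + drift * dt
                       + diff * np.sqrt(dt) * rng.standard_normal(), 0.0)
    return N

# discounted yield along one path, to be averaged over many paths:
# np.sum(np.exp(-delta * t) * q * e * N) * dt, with t = np.arange(len(N)) * dt
```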
TODO:
figure out how he generated his tables (and maybe re-generate them.)
Monday, November 1, 2010
Oceanographic preferences of Atlantic bluefin tuna, Thunnus thynnus, on their Gulf of Mexico breeding grounds
Authors: S. Teo, A. Boustany, and B. Block, 2007.
This paper is a statistical follow-up to the paper Annual migrations.... (reviewed below.) The methodology is rather clever. The raw observables are the same as before, i.e. temperature, depth, and location from the pop-up archival tags of 28 tuna. To assess the extent to which the inferred habitat preferences represent real preferences, as opposed to stochastic fluctuations, the authors plumb a number of databases containing such oceanographic information as wind speed, temperature, etc. for the entire Gulf. At any given point $(x,y,z)$ within the Gulf proper, they "sample" the published data by convolving it with a Gaussian kernel, thus mitigating observation and sensing errors. Monte Carlo sample paths are generated on a fish-by-fish basis by a quasi-Brownian process in which total daily movement corresponds exactly to the recorded total daily movement of the fish, but the direction of the movement is random. The environmental profiles along these paths describe the set of all possible habitats.
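A sketch of the null-path generator as I understand it (the flat x-y geometry and all names are my assumptions; a real version would also need to reject steps that leave the Gulf):

```python
import numpy as np

rng = np.random.default_rng(1)

def null_path(start_xy, daily_distances):
    """Observed daily step lengths, uniformly random headings."""
    path = [np.asarray(start_xy, dtype=float)]
    for d in daily_distances:
        theta = rng.uniform(0.0, 2.0 * np.pi)
        path.append(path[-1] + d * np.array([np.cos(theta), np.sin(theta)]))
    return np.array(path)
```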
The authors use two models to assess habitat preferences. The first involves the Chesson Preference Index, which for a single variable (e.g. temperature) is calculated as
$ \alpha_i = \frac{o_i/\pi_i}{\sum_{j=1}^n (o_j/\pi_j)}$,
where $o_i$ is the observed sample proportion of used units in the $i$th habitat type, and $\pi_i$ is the sample proportion of available units (with the latter calculated via the Monte Carlo simulations described above.) Note that the precise value of the index depends on the Monte Carlo simulation--the authors generated 10,000 sets of 10,000 paths and calculated the CPI for each, thus generating a histogram of CPIs for each environmental variable.
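The index itself is a two-line computation once the used and available samples are binned; a minimal sketch of the $\alpha_i$ formula above, assuming 1-D data and arbitrary bin edges:

```python
import numpy as np

def chesson_alpha(used, available, bins):
    o = np.histogram(used, bins=bins)[0].astype(float)       # used per bin
    pi = np.histogram(available, bins=bins)[0].astype(float)  # available per bin
    o /= o.sum()
    pi /= pi.sum()
    ratio = np.divide(o, pi, out=np.zeros_like(o), where=pi > 0)
    return ratio / ratio.sum()   # the alpha_i; sums to one
```

Repeating this over many Monte Carlo availability samples yields the CPI histograms described above.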
The problem with the CPI approach is that a) it uses bins, which create artifacts, and b) it leverages Monte Carlo techniques, which may result in false positives. As a consequence, the authors invoke a discrete choice model to calculate the resource selection function, defined as the numerator of the expression
$p(i) = \frac{\prod_{j=1}^p e^{\beta_j x_{ij}}}{\sum_{k \in U' \cup A} \prod_{j=1}^p e^{\beta_j x_{kj}}}$,
where the $\beta_j$ are coefficients to be estimated, $x_{ij}$ represents the value of oceanographic parameter $j$ in area $i$, $U'$ is the set of used areas, and $A$ is the set of available areas in the Monte Carlo sample. The $\beta$s are calculated by maximizing the likelihood function
$L(\beta_1, \cdots, \beta_p) = \prod_{i=1}^{n_u} p(i)$,
where the product runs over the $n_u$ used areas, a task the authors tackled with the help of the Cox proportional hazards model.
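The same fit can be done by direct numerical maximization, which makes the structure plainer than the Cox machinery. A minimal sketch of the likelihood exactly as written above, with one common availability set (in the paper each used unit presumably gets its own, which is what the Cox trick handles); array names are mine.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def neg_log_lik(beta, X_used, X_all):
    # log p(i) = x_i . beta - log sum_k exp(x_k . beta), summed over used areas
    return -(np.sum(X_used @ beta) - len(X_used) * logsumexp(X_all @ beta))

def fit_rsf(X_used, X_all):
    """X_used: covariate rows for used areas; X_all: rows for U' union A."""
    p = X_all.shape[1]
    res = minimize(neg_log_lik, np.zeros(p), args=(X_used, X_all))
    return res.x  # estimated beta_1, ..., beta_p
```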
The upshot of the analysis is that the histograms generated by merely binning the observables were pretty close to those generated by this convoluted statistical analysis, with one or two exceptions.