Authors: J. Hiddink, T. Hutton, S. Jennings, and M. Kaiser, 2006.
This paper analyzes the effect of area closures and effort reduction on benthic populations. The data is drawn from records of fleets in the North Sea. The results are discouraging: most closures have net negative yields for benthic beasts.
The computations are obscure. Hiddink et al. claim to be using a random utility model, but the equations are nowhere to be found. The entire paper has one equation, which explains how effort is redistributed in response to a closure. How fish redistribute themselves, or how populations grow in response to a diversion of fishing effort, remains unclear. The authors cite themselves as the source of some model in which presumably all is explained, but I didn't chase that model down.
TODO:
1. Chase down the paper with the benthic model.
Tuesday, November 9, 2010
The cost of sea turtle preservation: the case of Hawaii's Pelagic Longliners
Authors: R. Curtis and R. Hicks, 2000.
This paper is another computational game rigged around sketchy "data" from logbooks, market samples, etc. The game has a name: the Random Utility Model (RUM), which I suspect is a standard tool in economics. As far as I can tell, the authors start with a log utility function, and linearize expected utility around a total of nine quantities, each of which is presumably estimable. The results are actual dollar amounts for how much each closure will impact each fisherman, on average.
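For my own reference, here is a hedged sketch of the RUM machinery written as a standard conditional logit, which is not necessarily the authors' exact linearized specification. The sites, attributes, and coefficients are all invented, not taken from Curtis and Hicks.

```python
import numpy as np

# Conditional logit: utility of site i is a linear index beta . x_i plus an
# i.i.d. Gumbel error, which gives the familiar softmax choice probabilities.

def choice_probabilities(X, beta):
    """P(choose site i) = exp(beta . x_i) / sum_k exp(beta . x_k)."""
    v = X @ beta                 # deterministic utility of each site
    e = np.exp(v - v.max())      # subtract max for numerical stability
    return e / e.sum()

# Three hypothetical fishing sites described by (expected catch, travel cost).
X = np.array([[1.0, 2.0],
              [0.5, 1.0],
              [2.0, 4.0]])
beta = np.array([1.0, -0.3])     # likes catch, dislikes cost

p = choice_probabilities(X, beta)

# A per-fisherman dollar impact of a closure can be read off as the change in
# the logsum (expected maximum utility), divided by the marginal utility of
# income -- taken here to be the cost coefficient's magnitude (0.3, hypothetical).
def logsum(X, beta):
    return np.log(np.exp(X @ beta).sum())

loss_dollars = (logsum(X, beta) - logsum(X[:2], beta)) / 0.3  # close site 2
```

The logsum welfare measure is the standard way such models turn a closure into an average dollar figure, which matches the kind of output the paper reports.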
This paper has the virtue of being concise, but the choice of variable names and the total lack of exposition make it vaguely intolerable. Nothing TODO.
Monday, November 8, 2010
Fishing effort redistribution in response to area closures
Authors: J. Powers and S. Abeare, 2009.
These authors leverage what they call an "ideal free distribution" (IFD) to model fishermen's self-redistribution after area closures. In essence, the IFD concept just means that the quality of an area changes in response to an influx of fishermen. On a practical level, the upshot is a complicated numerical simulation in which there are a number of regions, each of a certain richness, and fishermen move sequentially in such a way as to optimize their profits. The "results" section is based on numbers pilfered from a genuine fisheries monitoring agency; I wrote to some chap to see if it was publicly available, but apparently one needs clearances of some sort.
The value of this paper seems to be in its ideas, not its numerics.
TODO:
1. implement the numerics
2. see how sensitive the results might be
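As a first pass at the TODO, a toy version of the sequential-arrival simulation might look like the following, under assumed functional forms (per-boat payoff = richness / (boats + 1)) that are mine, not Powers and Abeare's.

```python
import numpy as np

# Each region has an intrinsic richness; a region's per-boat payoff declines
# as boats pile in; boats arrive one at a time and each picks the region with
# the best current payoff. A closure just masks a region out.

def allocate(richness, n_boats, closed=()):
    richness = np.asarray(richness, dtype=float)
    boats = np.zeros_like(richness)
    open_mask = np.ones(len(richness), dtype=bool)
    open_mask[list(closed)] = False
    for _ in range(n_boats):
        payoff = np.where(open_mask, richness / (boats + 1), -np.inf)
        boats[payoff.argmax()] += 1   # greedy sequential arrival
    return boats

before = allocate([10, 6, 4], 20)              # richness values are made up
after  = allocate([10, 6, 4], 20, closed=[0])  # close the richest region
```

The greedy arrivals equalize per-boat payoffs across open regions, which is the IFD prediction; sensitivity could then be probed by perturbing the richness vector.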
Harvesting strategies for a randomly fluctuating population
Author: D. Ludwig, 1980.
This is a short but deceptively simple paper. Ludwig compares several harvesting strategies under three different growth models. The stochastic component of each process is described by equations of the form
$ E(dN) = [f(N) - N - qeN]dt$
$ E(dN^2) = 2 \epsilon N^2 dt$
where $E$ denotes expectation and $e$ denotes effort. The deterministic models all have the basic form
$ dN/dt = f(N) - N - qeN$,
where $f$ takes different forms depending on the particular growth model. The models are thus not spatially explicit. Randomness is dispensed with by assuming an optimal feedback policy and deriving a differential equation for discounted expected yield as a function of initial population size.
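A minimal Euler-Maruyama sketch of this setup, assuming logistic net growth $f(N) - N = rN(1 - N/K)$ (one possible form of $f$) and a constant-effort policy (one of several that Ludwig compares). All parameter values are illustrative, and Ludwig's actual tables come from solving the feedback-policy differential equation, not brute-force Monte Carlo.

```python
import numpy as np

# Simulate dN = [f(N) - N - qeN] dt + sqrt(2*eps) * N dW, which matches the
# moment conditions E(dN) and E(dN^2) above, and accumulate the discounted
# yield integral of exp(-delta t) * q e N dt along each sample path.
rng = np.random.default_rng(0)

def discounted_yield(N0, r=1.0, K=1.0, q=1.0, e=0.3, eps=0.05,
                     delta=0.05, dt=0.01, T=30.0, n_paths=1000):
    N = np.full(n_paths, float(N0))
    total = np.zeros(n_paths)
    for i in range(int(T / dt)):
        harvest = q * e * N
        total += np.exp(-delta * i * dt) * harvest * dt
        growth = r * N * (1.0 - N / K)            # f(N) - N, logistic
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        N = np.maximum(N + (growth - harvest) * dt
                       + np.sqrt(2 * eps) * N * dW, 0.0)
    return total.mean()   # expected discounted yield from stock size N0

y = discounted_yield(0.5)
```

Sweeping `N0` would give a crude numerical stand-in for the expected-yield-versus-initial-population curves behind his tables.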
TODO:
figure out how he generated his tables (and maybe re-generate them).
Monday, November 1, 2010
Oceanographic preferences of Atlantic bluefin tuna, Thunnus thynnus, on their Gulf of Mexico breeding grounds
Authors: S. Teo, A. Boustany, and B. Block, 2007.
This paper is a statistical follow-up to the paper Annual migrations.... (reviewed below). The methodology is rather clever. The raw observables are the same as before, i.e. temperature, depth, and location from the pop-up archival tags of 28 tuna. To assess the extent to which the inferred habitat preferences represent real preferences, as opposed to stochastic fluctuations, the authors plumb a number of databases containing such oceanographic information as wind speed, temperature, etc. for the entire Gulf. At any given point $(x,y,z)$ within the Gulf proper, they "sample" the published data by convolving it with a Gaussian kernel, thus mitigating observation and sensing errors. Monte Carlo sample paths are generated on a fish-by-fish basis by a quasi-Brownian process in which total daily movement corresponds exactly to the recorded total daily movement of the fish, but the direction of the movement is random. The environmental profiles along these paths describe the set of all possible habitats.
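The null-path construction can be sketched in a few lines. The daily distances and starting point below are made up, since the real ones come from the 28 tags.

```python
import numpy as np

# Quasi-Brownian null paths: each simulated day the fish moves exactly the
# recorded daily distance, but in a uniformly random direction.
rng = np.random.default_rng(1)

def random_heading_path(start, daily_distances):
    """Return an array of daily positions, preserving observed step lengths."""
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    for d in daily_distances:
        theta = rng.uniform(0.0, 2 * np.pi)            # random heading
        pos += d * np.array([np.cos(theta), np.sin(theta)])
        path.append(pos.copy())
    return np.array(path)

daily = [12.0, 8.5, 20.1, 5.0]                         # hypothetical km/day
path = random_heading_path((0.0, 0.0), daily)
```

Sampling the kernel-smoothed environmental fields along many such paths gives the "available habitat" distribution against which the observed track is compared.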
The authors use two models to assess habitat preferences. The first involves the Chesson Preference Index, which for a single variable (e.g. temperature) is calculated as
$ \alpha_i = \frac{o_i/\pi_i}{\sum_{j=1}^n o_j/\pi_j}$,
where $o_i$ is the observed sample proportion of used units in the $i$th habitat type, and $\pi_i$ is the sample proportion of available units (with the latter calculated via the Monte Carlo simulations described above). Note that the precise value of the index depends on the Monte Carlo simulation: the authors generated 10,000 sets of 10,000 paths and calculated the CPI for each, thus generating a histogram of CPIs for each environmental variable.
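For concreteness, the index is a one-liner given binned used and available proportions; the numbers here are illustrative, not the paper's.

```python
import numpy as np

# Chesson preference index: alpha_i = (o_i / pi_i) / sum_j (o_j / pi_j).
# Values above 1/n indicate preference for bin i, below 1/n avoidance.

def chesson(o, pi):
    ratio = np.asarray(o, float) / np.asarray(pi, float)
    return ratio / ratio.sum()

o  = np.array([0.10, 0.50, 0.40])   # used proportions (e.g. temperature bins)
pi = np.array([0.30, 0.40, 0.30])   # available proportions, from the MC paths
alpha = chesson(o, pi)
```

Recomputing `alpha` over each of the 10,000 Monte Carlo availability sets is what produces the CPI histograms.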
The problem with the CPI approach is that (a) it uses bins, which create artifacts, and (b) it leverages Monte Carlo techniques, which may result in false positives. As a consequence, the authors invoke a discrete choice model to calculate the resource selection function, defined as the numerator of the expression
$p(i) = \frac{\prod_{j=1}^p e^{\beta_j x_{ij}}}{\sum_{k \in U'\cup A} \prod_{j=1}^p e^{\beta_j x_{kj}}}$,
where the $\beta_j$ are coefficients to be estimated, the $x_{ij}$ represent the value of oceanographic parameter $j$ in area $i$, $U'$ is the set of used areas, and $A$ is the set of total areas in the Monte Carlo sample. The $\beta$s are calculated by maximizing the likelihood function
$L(\beta_1, \cdots, \beta_p) = \prod_{j=1}^{n_u} p(j)$,
a task the authors tackled with the help of the Cox proportional hazards function.
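In vectorized form the selection probabilities and (log) likelihood look roughly like the following. The design matrix, the split into used versus available areas, and all numbers are made up for illustration; the paper's actual fit goes through the Cox proportional hazards machinery rather than direct maximization.

```python
import numpy as np

# Resource selection function as a discrete choice model: the weight of area i
# is exp(sum_j beta_j * x_ij); probabilities normalize over used + available
# areas, and the likelihood is the product of p(i) over the used areas.

def rsf_log_likelihood(beta, X, n_u):
    w = np.exp(X @ beta)              # numerator of p(i) for every area
    p = w / w.sum()                   # p(i) over the full choice set U' + A
    return np.log(p[:n_u]).sum()      # product over used areas, in logs

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))          # 10 used + 40 available areas, 3 covariates
beta = np.array([0.5, -0.2, 0.1])     # hypothetical coefficients
ll = rsf_log_likelihood(beta, X, n_u=10)
```

Handing `-rsf_log_likelihood` to a generic optimizer would recover the $\beta$s in this toy setting.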
The upshot of the analysis is that the histograms generated by merely binning the observables were pretty close to those generated by this convoluted statistical analysis, with one or two exceptions.