Science Environment for Ecological Knowledge
Beam Knowledge Rep Jan 04

Difference between version 2 and version 1:

Line 1 was replaced by line 1
- !! SEEK BEAM-KR Meeting Notes, January 4-6, 2004
+ !!! SEEK BEAM-KR Meeting Notes, January 4-6, 2004
Line 3 was replaced by line 3
- !! Topics
+ !! Topics
Lines 5-6 were replaced by lines 5-6
- * Use Case Development
- ** AMS
+ * Use Case Development
+ ** AMS
Line 9 was replaced by line 9
- *** It would be nice to have some concrete examples of scenarios; see Matt's scenario below. It would also be useful to have use cases for the various versions of niche modeling being considered (native species, climate change, and so on).
+ *** It would be nice to have some concrete examples of scenarios; see Matt's scenario below. It would also be useful to have use cases for the various versions of niche modeling being considered (native species, climate change, and so on).
Line 13 was replaced by line 13
- *** What about component integration, for reusing algorithms/components within niche modeling or post-niche modeling?
+ *** What about component integration, for reusing algorithms/components within niche modeling or post-niche modeling?
Line 15 was replaced by line 15
- ** Can we identify recognized example datasets to use (ones that weren't crafted to fit a specific model implementation)?
+ ** Can we identify recognized example datasets to use (ones that weren't crafted to fit a specific model implementation)?
Lines 17-22 were replaced by lines 17-22
- * Integration / Reuse
- ** What components exist, for example, for layer integration and so on? What components need to be explicitly written, and which are functions of SMS?
- ** What are the input and output data types (schemas) for niche modeling components?
- ** How should we handle GARP parameters?
- ** (Deana): Figure out whether we can/should split up GARP into separate actors. I think we should, if it's not too big a programming issue.
- ** (Deana): Reusability of actors for other purposes. Example: the GARP model was developed for ecological niche modelers, but it is really just a logistic model that works on independent variables that are in the form of environmental layers, and dependent variables that are point data. It could be used in any other discipline that had similar needs. For example, if one wanted to predict locations in Florida that are susceptible to sinkhole formation, you could input the location of known sinkholes (points) and relevant environmental layers (hydrology, chemistry, topography, etc.), and the GARP model would work just fine. I think this is a "user scenario", like Shawn is asking for. I can spend some time next week coming up with some of these, maybe with the help of the other domain scientists who are there.
+ * Integration / Reuse
+ ** What components exist, for example, for layer integration and so on? What components need to be explicitly written, and which are functions of SMS?
+ ** What are the input and output data types (schemas) for niche modeling components?
+ ** How should we handle GARP parameters?
+ ** (Deana): Figure out whether we can/should split up GARP into separate actors. I think we should, if it's not too big a programming issue.
+ ** (Deana): Reusability of actors for other purposes. Example: the GARP model was developed for ecological niche modelers, but it is really just a logistic model that works on independent variables that are in the form of environmental layers, and dependent variables that are point data. It could be used in any other discipline that had similar needs. For example, if one wanted to predict locations in Florida that are susceptible to sinkhole formation, you could input the location of known sinkholes (points) and relevant environmental layers (hydrology, chemistry, topography, etc.), and the GARP model would work just fine. I think this is a "user scenario", like Shawn is asking for. I can spend some time next week coming up with some of these, maybe with the help of the other domain scientists who are there.
Lines 24-27 were replaced by lines 24-27
- * Semantics of Niche Modeling:
- ** What is the ontology for niche modeling / GARP / GRASP?
- ** What is the ontology for inputs/outputs of components?
- ** What ontological information is required for pre- and post-niche modeling?
+ * Semantics of Niche Modeling:
+ ** What is the ontology for niche modeling / GARP / GRASP?
+ ** What is the ontology for inputs/outputs of components?
+ ** What ontological information is required for pre- and post-niche modeling?
Lines 30-34 were replaced by lines 30-34
- (Deana): I think, if I understand everything correctly, we need ontologies for:
- # Data discovery
- # Integration of selected datasets
- # Actor semantics
- # Pipeline semantics
+ (Deana): I think, if I understand everything correctly, we need ontologies for:
+ # Data discovery
+ # Integration of selected datasets
+ # Actor semantics
+ # Pipeline semantics
Line 36 was replaced by line 36
- (Deana): I think we also need to consider some sort of standardized metadata for actors/pipelines (a metamodel? meta-analysis?) so that actors/pipelines can be searched and reused.
+ (Deana): I think we also need to consider some sort of standardized metadata for actors/pipelines (a metamodel? meta-analysis?) so that actors/pipelines can be searched and reused.
Line 39 was replaced by line 39
- !!Scenarios
+ !!Scenarios
Lines 41-42 were replaced by lines 41-42
- * RangePrediction
- * MammalTestCase
+ * RangePrediction
+ * MammalTestCase
Line 44 was replaced by line 44
- !!Matt's Niche-Modeling Scenario:
+ !!Matt's Niche-Modeling Scenario:
Line 46 was replaced by line 46
- A scientist is interested in the native range of an oak species. The scientist first creates a semantic query -- a query posed against ontological information -- requesting (ecogrid) datasets that can be used as occurrence data for a particular oak species ('Quercus rubra'), over a specific spatial footprint, and over a specific time period. (This example is expressed over the space, time, and taxa context of measurement.)
+ A scientist is interested in the native range of an oak species. The scientist first creates a semantic query -- a query posed against ontological information -- requesting (ecogrid) datasets that can be used as occurrence data for a particular oak species ('Quercus rubra'), over a specific spatial footprint, and over a specific time period. (This example is expressed over the space, time, and taxa context of measurement.)
Line 48 was replaced by line 48
- The scientist then issues the query using the semantic mediation system, which performs a series of steps to construct the necessary underlying queries (query rewritings) to the ecogrid. The underlying queries return a set of datasets. These returned datasets are then further manipulated by the mediation system. For example, the datasets returned may need to be joined (to extract the occurrence data), pruned to fit into the desired footprint, converted to the correct presence measure (for example, the value '1' for presence), and irrelevant fields removed. At this point, the scientist may wish to remove some of the candidate datasets from further analysis. The datasets are then combined (unioned) to form a single, uniform input table.
+ The scientist then issues the query using the semantic mediation system, which performs a series of steps to construct the necessary underlying queries (query rewritings) to the ecogrid. The underlying queries return a set of datasets. These returned datasets are then further manipulated by the mediation system. For example, the datasets returned may need to be joined (to extract the occurrence data), pruned to fit into the desired footprint, converted to the correct presence measure (for example, the value '1' for presence), and irrelevant fields removed. At this point, the scientist may wish to remove some of the candidate datasets from further analysis. The datasets are then combined (unioned) to form a single, uniform input table.
Line 50 was replaced by line 50
- Next, the mediation system uses the implied footprint of the input table to query for (again, using the ecogrid) relevant environmental layers. The resulting layers are then integrated, which involves clipping the returned layers to the implied footprint, re-gridding the datasets to the same scale (based on the density of the presence/absence point datasets and environmental layers), and re-projecting the datasets to a common projection scheme (so that points are correctly placed on a flat map).
+ Next, the mediation system uses the implied footprint of the input table to query for (again, using the ecogrid) relevant environmental layers. The resulting layers are then integrated, which involves clipping the returned layers to the implied footprint, re-gridding the datasets to the same scale (based on the density of the presence/absence point datasets and environmental layers), and re-projecting the datasets to a common projection scheme (so that points are correctly placed on a flat map).
Line 52 was replaced by line 52
- Finally, the rule set and resulting prediction map are stored (in the ecogrid) with appropriate metadata.
+ Finally, the rule set and resulting prediction map are stored (in the ecogrid) with appropriate metadata.
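The mediation steps in this scenario (semantic filter on taxon, pruning to the footprint, normalizing the presence measure to 1, and unioning into one input table) can be sketched as a small pipeline. This is a hypothetical illustration: the function names, dataset structure, and field names below are invented for the sketch and are not SEEK/EcoGrid APIs.

```python
# Hypothetical sketch of the mediation steps described in the scenario above:
# filter candidate datasets by taxon, prune points to the desired footprint,
# code presence as 1, and union the results into a single input table.

def in_footprint(point, footprint):
    """True if (lon, lat) falls inside a (min_lon, min_lat, max_lon, max_lat) box."""
    lon, lat = point
    min_lon, min_lat, max_lon, max_lat = footprint
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

def build_input_table(datasets, taxon, footprint):
    """Union matching datasets into one uniform (lon, lat, presence) table."""
    table = []
    for ds in datasets:
        if ds["taxon"] != taxon:                  # semantic filter on the taxa context
            continue
        for point in ds["points"]:
            if in_footprint(point, footprint):    # prune to the desired footprint
                table.append((point[0], point[1], 1))  # presence coded as the value 1
    return table

# Illustrative occurrence datasets (not real ecogrid data):
datasets = [
    {"taxon": "Quercus rubra", "points": [(-84.0, 35.5), (-120.0, 45.0)]},
    {"taxon": "Acer rubrum",   "points": [(-83.0, 36.0)]},
]
table = build_input_table(datasets, "Quercus rubra", (-90.0, 30.0, -75.0, 40.0))
```

The real mediation system would perform these steps via query rewriting against the ecogrid rather than in-memory filtering; the sketch only shows the data flow.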
Line 55 was replaced by line 55
- ----
+ ----
Line 57 was replaced by line 57
- !!January 4th
+ !!January 4th
Lines 59-60 were replaced by lines 59-60
- * Take the equations from the Roughgarden text and import them into Kepler (then send them to her to integrate into her textware)
- * How should MOML / EML / OWL be glued together? (Rich, Shawn, Chad)
+ * Take the equations from the Roughgarden text and import them into Kepler (then send them to her to integrate into her textware)
+ * How should MOML / EML / OWL be glued together? (Rich, Shawn, Chad)
Lines 62-67 were replaced by lines 62-67
- * Rich gave ~GrOWL and ezOWL demos (ontology visualization tools)
- * SPECIFY group at KU (a platform for capturing data); Stan Blum has a nice model of the interrelationships between concepts in biodiversity data, but it disintegrates when you want to discuss how to use the data
-
- * (Town) Experiment using datasets with the GARP model in Kepler
- ** //Point data// (latitude/longitude for a specific species of barred owl...)
- ** //Pure validation/sampling// (sample of data points): plot points, impose an appropriately sized grid (automating this needs additional specification and depends on the purpose of your study), then select grid cells (checkerboard-style) for samples and data
+ * Rich gave ~GrOWL and ezOWL demos (ontology visualization tools)
+ * SPECIFY group at KU (a platform for capturing data); Stan Blum has a nice model of the interrelationships between concepts in biodiversity data, but it disintegrates when you want to discuss how to use the data
+
+ * (Town) Experiment using datasets with the GARP model in Kepler
+ ** //Point data// (latitude/longitude for a specific species of barred owl...)
+ ** //Pure validation/sampling// (sample of data points): plot points, impose an appropriately sized grid (automating this needs additional specification and depends on the purpose of your study), then select grid cells (checkerboard-style) for samples and data
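The checkerboard selection of grid cells described above can be sketched as follows. This is a hypothetical illustration assuming a simple regular grid, not the actual Kepler/GARP sampling code; the function name and parameters are invented.

```python
# Hypothetical sketch of checkerboard grid sampling: impose a regular grid
# over the occurrence points, then assign each point to the training or test
# sample according to the parity of its grid cell (like checkerboard squares).
import math

def checkerboard_split(points, origin, cell_size):
    """Split (lon, lat) points into training/test sets by checkerboard cell parity."""
    train, test = [], []
    ox, oy = origin
    for lon, lat in points:
        col = math.floor((lon - ox) / cell_size)
        row = math.floor((lat - oy) / cell_size)
        if (col + row) % 2 == 0:       # "black" cells -> training sample
            train.append((lon, lat))
        else:                          # "white" cells -> test sample
            test.append((lon, lat))
    return train, test

# Four points, one per cell of a 2x2 grid:
points = [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5), (1.5, 1.5)]
train, test = checkerboard_split(points, origin=(0.0, 0.0), cell_size=1.0)
```

As the notes say, choosing the grid size automatically needs additional specification; here it is simply a parameter.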
Lines 70-80 were replaced by lines 70-80
- * (Town) Desktop Garp (lifemapper / desktop garp)
- ** Select Data points
- *** Load (one) data set (ESRI shapefile, Excel, or text file: species, longitude, latitude)
- *** Gives species list of the different species in the species column
- *** Independent validation is a pure random sample (finest grid possible -- one occurrence point per grid cell)
- *** A specified percentage (default one half) of the data points is used for training
- ** Select layers (note that layers and data points are chosen in any order)
- *** select a layer dataset, and can select some of the layers in the dataset to use
- *** can choose from all selected layers (default), all combinations of the selected layers, or all combinations of size n
- ** Optimization Parameters
- *** number of runs
+ * (Town) Desktop Garp (lifemapper / desktop garp)
+ ** Select Data points
+ *** Load (one) data set (ESRI shapefile, Excel, or text file: species, longitude, latitude)
+ *** Gives species list of the different species in the species column
+ *** Independent validation is a pure random sample (finest grid possible -- one occurrence point per grid cell)
+ *** A specified percentage (default one half) of the data points is used for training
+ ** Select layers (note that layers and data points are chosen in any order)
+ *** select a layer dataset, and can select some of the layers in the dataset to use
+ *** can choose from all selected layers (default), all combinations of the selected layers, or all combinations of size n
+ ** Optimization Parameters
+ *** number of runs
Lines 82-96 were replaced by lines 82-96
- *** max iterations (number of generations to converge)
- *** rule types considered (atomic, range, negated range, logistic regression)
- *** all combinations of the selected rules
- ** Best Subset Selection Parameters
- *** active
- *** percent omission
- *** total models (a ruleset that results from one run of the genetic algorithm) under hard omission threshold
- *** % of distribution (commission threshold)
- ** Projection Layers (where to apply the resulting rule sets)
- *** Available datasets
- *** Current datasets for project
- ** Output
- *** maps (bitmaps, ascii grids, arc/info grids)
- *** models (all or best subset)
- *** output directory
+ *** max iterations (number of generations to converge)
+ *** rule types considered (atomic, range, negated range, logistic regression)
+ *** all combinations of the selected rules
+ ** Best Subset Selection Parameters
+ *** active
+ *** percent omission
+ *** total models (a ruleset that results from one run of the genetic algorithm) under hard omission threshold
+ *** % of distribution (commission threshold)
+ ** Projection Layers (where to apply the resulting rule sets)
+ *** Available datasets
+ *** Current datasets for project
+ ** Output
+ *** maps (bitmaps, ascii grids, arc/info grids)
+ *** models (all or best subset)
+ *** output directory
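The Desktop GARP settings enumerated above (data points, layers, optimization, best-subset selection, projection, output) can be collected into a single configuration structure. This is a minimal sketch with hypothetical field names and defaults; they are not the actual Desktop GARP parameter names.

```python
# Hypothetical configuration structure mirroring the Desktop GARP options
# listed above. All names and defaults are illustrative, not the real
# Desktop GARP parameters.
from dataclasses import dataclass, field

@dataclass
class GarpRunConfig:
    # Select data points
    training_fraction: float = 0.5            # default is one half for training
    # Select layers
    layers: list = field(default_factory=list)
    layer_combinations: str = "all_selected"  # or all combinations / size-n combos
    # Optimization parameters
    runs: int = 20                            # number of runs
    max_iterations: int = 1000                # generations allowed to converge
    rule_types: tuple = ("atomic", "range", "negated_range", "logistic")
    # Best subset selection parameters
    best_subset_active: bool = True
    omission_percent: float = 5.0             # hard omission threshold
    commission_percent: float = 50.0          # % of distribution threshold
    # Projection layers and output
    projection_layers: list = field(default_factory=list)
    output_formats: tuple = ("ascii_grid",)   # bitmaps, ascii grids, arc/info grids
    output_dir: str = "."

cfg = GarpRunConfig(layers=["elevation", "precipitation"])
```

Capturing the run settings as one structure is also a step toward the standardized actor/pipeline metadata Deana raises above, since a whole run could then be described and searched by its parameters.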
Lines 98-113 were replaced by lines 98-113
- *** (Town) For doing various types of predictions (invasive species, genetic modification, ...)
- *** (Town) Storing a model (ruleset, the output of GARP) for reuse is often useful for different types of predictions
- *** (Town) There are some motivations for integrating rule sets, e.g., for predictions based on predator-prey interactions
-
- !!January 5th
-
- * Semantic Mediation / KR within niche modeling and BEAM:
- ** Consensus that there aren't semantic challenges in the current vision/implementation of GARP in Kepler
- ** Even many of the interpolation tasks are not KR problems (they are geospatial)
- ** There are some types of data that are semantically difficult, like productivity data, soil, biodiversity
- ** Ecological niche modeling: not much done on soil characteristics
- ** Terry's soils aggregation tool (takes soil data, water data, etc., and generates new, derived datasets useful for plant niche modeling)
- ** "High-resolution" route (finer resolution); as you go to finer scales, resolution becomes more of an issue
- ** Climate and climatic variables are a big problem in terms of semantic issues; a group in LTER that we could bring in to deal with the issue (if we wish to incorporate climate info into BEAM)
- ** Taxonomic issues, e.g., all mammals
- ** Issues related to productivity, what is meant by diversity and biodiversity
+ *** (Town) For doing various types of predictions (invasive species, genetic modification, ...)
+ *** (Town) Storing a model (ruleset, the output of GARP) for reuse is often useful for different types of predictions
+ *** (Town) There are some motivations for integrating rule sets, e.g., for predictions based on predator-prey interactions
+
+ !!January 5th
+
+ * Semantic Mediation / KR within niche modeling and BEAM:
+ ** Consensus that there aren't semantic challenges in the current vision/implementation of GARP in Kepler
+ ** Even many of the interpolation tasks are not KR problems (they are geospatial)
+ ** There are some types of data that are semantically difficult, like productivity data, soil, biodiversity
+ ** Ecological niche modeling: not much done on soil characteristics
+ ** Terry's soils aggregation tool (takes soil data, water data, etc., and generates new, derived datasets useful for plant niche modeling)
+ ** "High-resolution" route (finer resolution); as you go to finer scales, resolution becomes more of an issue
+ ** Climate and climatic variables are a big problem in terms of semantic issues; a group in LTER that we could bring in to deal with the issue (if we wish to incorporate climate info into BEAM)
+ ** Taxonomic issues, e.g., all mammals
+ ** Issues related to productivity, what is meant by diversity and biodiversity
Lines 115-121 were replaced by lines 115-121
- ** The interpretation of the model (ruleset) is dependent on the framework of running the model (the assumptions used in building the models)
- ** Continental scale mammal climate change; what are the semantic issues?
- *** Mammal Taxonomic Concepts
- **** US National Museum data would be heterogeneous from DiGIR data
- *** Climate
- **** Current, future, and past climatologies need to "speak" the same language
- **** Climate layers (future and current) need to be compatible
+ ** The interpretation of the model (ruleset) is dependent on the framework of running the model (the assumptions used in building the models)
+ ** Continental scale mammal climate change; what are the semantic issues?
+ *** Mammal Taxonomic Concepts
+ **** US National Museum data would be heterogeneous from DiGIR data
+ *** Climate
+ **** Current, future, and past climatologies need to "speak" the same language
+ **** Climate layers (future and current) need to be compatible
Lines 123-124 were replaced by lines 123-124
- **** Process that involves going from weather data to climate data
- ** Papers: Grinnell, 1917, 1924; Hutchinson, 1959; MacArthur, 1972 (Geographic Ecology)
+ **** Process that involves going from weather data to climate data
+ ** Papers: Grinnell, 1917, 1924; Hutchinson, 1959; MacArthur, 1972 (Geographic Ecology)
Line 127 was replaced by line 127
- !! Participant List
+ !! Participant List
Lines 131-132 were replaced by lines 131-132
- -------------------------------------------
- [SemanticMediationCommunity]
+ -------------------------------------------
+ [SemanticMediationCommunity]