Line 3 was replaced by line 3
- These will be rearranged later into a more coherent agenda after they have been collected
+ ! ''Moderator: Matt Jones''
At line 4 added 2 lines.
+ ! Agenda
+ * Updates: Overview and Progress
At line 8 added 3 lines.
+
+
+
At line 9 added 1 line.
+ ** relationship to an 'Actor Repository' and EcoGrid
At line 12 added 1 line.
+ *** How do we annotate datasets, and what can we do with the annotations once we have them?
At line 14 added 1 line.
+ ** Choosing one (or more) distributed computing platforms
At line 15 added 1 line.
+ ** Dynamic loading of jar files with actors
Lines 21-22 were replaced by lines 30-34
- * GIS actors
- * Statistical actors
+ ** semantic annotation/registration for EcoGrid data objects
+ * Actors -- do we have a complete set?
+ ** GIS actors -- more needed?
+ ** Statistical actors
+ *** ANOVA, regression, and various other statistics nicely wrapped up (wrap R?)
At line 30 added 62 lines.
+ !! Notes from the Kepler breakout meeting on 11/3/04
+ * Ontology-based browsing and searching
+ ** add functionality to add new actors to the ontology
+ ** add functionality to change ontology views
+ ** add functionality to allow the user to put actors into different folders in an ontology
+ ** add functionality to suggest actors that can be used with other actors (SMS)
+ ** add functionality to add ontologies
+ ** change the categorization to a connection-based categorization
+ *** choose an actor, and the organization would change to show only actors that would work with the chosen one (see the sketch below)
+ * 3 interfaces
+ ** browse
+ ** search
+ ** contextual reduction of the actor base
+ |
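+ A minimal sketch of the connection-based categorization idea above, assuming a simplified, hypothetical view of actors as sets of typed ports (none of these class names are existing Kepler API): given a chosen actor, reduce the library to actors whose input types match the chosen actor's output types.
+
+ {{{
+ import java.util.List;
+ import java.util.Set;
+ import java.util.stream.Collectors;
+
+ // Hypothetical, simplified actor signature: just input/output port types.
+ record ActorSig(String name, Set<String> inputTypes, Set<String> outputTypes) {}
+
+ class ActorLibrary {
+     private final List<ActorSig> actors;
+
+     ActorLibrary(List<ActorSig> actors) { this.actors = actors; }
+
+     // "Contextual reduction of the actor base": keep only actors that
+     // could consume at least one output type of the chosen actor.
+     List<ActorSig> compatibleWith(ActorSig chosen) {
+         return actors.stream()
+             .filter(a -> a.inputTypes().stream()
+                 .anyMatch(chosen.outputTypes()::contains))
+             .collect(Collectors.toList());
+     }
+ }
+ }}}
+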
+ * Semantic Mediation
+ ** New EML module to describe processing
+ |
+ * Data management
+ ** search, organize, and store data (morphepler)
+ ** query subsystem
+ ** filesystem management a la Morpho
+ *** some see a need for the system to control file management, so that it controls ids and file locations
+ *** could specify one or more directories that would hold the data/metadata; the user would then drop objects into that area for the system to process
+ *** the system could be scanned for kzip (or whatever) files and periodically updated; this would need a system-wide index of kzip files
+ ** workflow transfer system
+ *** zip up the workflow so it can be easily transferred
+ *** don't require the use of the EcoGrid
+ *** archive format (see the sketch below)
+ **** workflow, data, and actors, with metadata for each (verifiable via checksum, etc.)
+ |
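+ A minimal sketch of the checksum part of that archive format (the manifest idea and names are assumptions for discussion, not a settled format): digest each archive entry so a transferred workflow archive can be verified on arrival.
+
+ {{{
+ import java.io.IOException;
+ import java.io.InputStream;
+ import java.nio.file.Files;
+ import java.nio.file.Path;
+ import java.security.MessageDigest;
+ import java.security.NoSuchAlgorithmException;
+ import java.util.HexFormat;
+
+ class ArchiveManifest {
+     // Digest one entry (workflow, data object, or actor jar) so the
+     // receiving side can verify it after transfer.
+     static String checksum(Path entry)
+             throws IOException, NoSuchAlgorithmException {
+         MessageDigest md = MessageDigest.getInstance("SHA-1");
+         try (InputStream in = Files.newInputStream(entry)) {
+             byte[] buf = new byte[8192];
+             int n;
+             while ((n = in.read(buf)) != -1) {
+                 md.update(buf, 0, n);
+             }
+         }
+         return HexFormat.of().formatHex(md.digest());
+     }
+ }
+ }}}
+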
+ * Method descriptions for workflows
+ ** extend the EML methods module
+ *** OWL in AdditionalMetadata?
+ ** other possible metadata standards
+ *** OWL?
+ *** others?
+ ** ACTION ITEM: need a workflow metadata language for semantically describing the workflow (see the sketch below)
+ |
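+ A minimal sketch of the "OWL in AdditionalMetadata" idea (the exact element layout is an assumption for discussion, not an agreed schema): use DOM to build an additionalMetadata fragment that points a workflow step at an OWL class describing what it computes.
+
+ {{{
+ import org.w3c.dom.Document;
+ import org.w3c.dom.Element;
+
+ class MethodAnnotation {
+     static final String OWL = "http://www.w3.org/2002/07/owl#";
+     static final String RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#";
+
+     // Build <additionalMetadata><describes>step</describes>
+     //       <metadata><owl:Class rdf:about="..."/></metadata>
+     //       </additionalMetadata>
+     static Element annotate(Document doc, String stepId, String owlClassUri) {
+         Element additional = doc.createElement("additionalMetadata");
+         Element describes = doc.createElement("describes");
+         describes.setTextContent(stepId);
+         Element metadata = doc.createElement("metadata");
+         Element owlClass = doc.createElementNS(OWL, "owl:Class");
+         owlClass.setAttributeNS(RDF, "rdf:about", owlClassUri);
+         metadata.appendChild(owlClass);
+         additional.appendChild(describes);
+         additional.appendChild(metadata);
+         return additional;
+     }
+ }
+ }}}
+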
+ -------------- |
+ !! Meeting notes |
+ |
+ ! Kepler and SMS/KR integration tasks |
+ * Annotate a specific set of data sources and targets |
+ ** start with Matt's Red Spruce integration example (Shawn, Matt, Mark) |
+ ** then work through increasingly complex examples (e.g., Cox's biodiv data) (Shawn, Matt, Mark, others) |
+ * Annotate the "component library" so that we know what functions the components perform, how they relate to the "conversion" tasks, and how to model those conversion tasks
+ ** e.g., given species abundance and area, one "can calculate" density; label the actors that can do this computation with that annotation (see the sketch below)
+ * Finish task list for additional features that need to be completed (Matt, Bertram, Shawn, Mark) |
+ |
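+ A minimal sketch of that "can calculate" labeling (class and method names are hypothetical, not the SMS API): a registry from capability annotations to actor names, so a query like "density from abundance and area" returns the candidate actors.
+
+ {{{
+ import java.util.HashMap;
+ import java.util.HashSet;
+ import java.util.Map;
+ import java.util.Set;
+
+ // A capability annotation: the quantities an actor needs and the
+ // quantity it can derive, e.g. {abundance, area} -> density.
+ record Capability(Set<String> inputs, String output) {}
+
+ class CapabilityRegistry {
+     private final Map<Capability, Set<String>> actors = new HashMap<>();
+
+     // Label an actor with a capability annotation.
+     void annotate(String actorName, Capability cap) {
+         actors.computeIfAbsent(cap, k -> new HashSet<>()).add(actorName);
+     }
+
+     // e.g. find("density", Set.of("abundance", "area"))
+     Set<String> find(String output, Set<String> available) {
+         return actors.getOrDefault(new Capability(available, output), Set.of());
+     }
+ }
+ }}}
+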
+ ! Kepler and distributed computation |
+ # "XKepler", ability to disassociate UI event stream from execution engine so that the UI (Vergil) can come and go, and the UI subscribes to UI event streams that might be buffered in a real-time system like ORB, and handles messages like GeoVista does for UI events and coordination |
+ # Distributed computation and grid computing |
+ * Two general approaches to be explored |
+ ** a Kepler interface to an existing distributed job system like GT/NIMROD/CONDOR
+ ** a new Kepler director that is "distributed aware" and knows how to schedule and invoke remote jobs through one of several services (see the sketch below):
+ *** ptexecute on the remote node |
+ *** a web/grid service on the remote node |
+ *** a managed job on the remote node (cf. GT2.x jobs)
+ ** Peer-to-peer style sharing of computation resources (possibly another grant)
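+
+ A minimal sketch of the "distributed aware" director idea (interfaces and names here are hypothetical, not Ptolemy/Kepler API): the director schedules jobs round-robin across remote invokers that sit behind a common interface, whether they wrap ptexecute, a web/grid service, or a managed job.
+
+ {{{
+ import java.util.List;
+ import java.util.concurrent.ExecutorService;
+ import java.util.concurrent.Executors;
+ import java.util.concurrent.Future;
+
+ // Common interface over the invocation options listed above.
+ interface RemoteInvoker {
+     String submit(String jobSpec) throws Exception;
+ }
+
+ class DistributedDirector {
+     private final List<RemoteInvoker> nodes;
+     private final ExecutorService pool;
+     private int next = 0;
+
+     DistributedDirector(List<RemoteInvoker> nodes) {
+         this.nodes = nodes;
+         this.pool = Executors.newFixedThreadPool(nodes.size());
+     }
+
+     // Round-robin scheduling; a real director would also account for
+     // node load and data movement.
+     synchronized Future<String> schedule(String jobSpec) {
+         RemoteInvoker node = nodes.get(next++ % nodes.size());
+         return pool.submit(() -> node.submit(jobSpec));
+     }
+ }
+ }}}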
+ |
+ |