At line 11 added 2 lines.
+
+
Line 41 was replaced by line 43
- !Notes from the kepler breakout meeting on 11/3/04
+ !!Notes from the kepler breakout meeting on 11/3/04
Lines 84-85 were replaced by lines 86-103
- * Annotate a specific set of data sources and targets (probably from Matt's fake example)
- **
+ * Annotate a specific set of data sources and targets
+ ** start with Matt's Red Spruce integration example (Shawn, Matt, Mark)
+ ** then work through increasingly complex examples (e.g., Cox's biodiv data) (Shawn, Matt, Mark, others)
+ * Annotate the "component library" so that we know what functions the components perform, how they relate to the "conversion" tasks, and how to model those conversion tasks
+ ** e.g., given species abundance and area, an actor "can calculate" density; label the actors that can do this computation with that annotation
+ * Finish task list for additional features that need to be completed (Matt, Bertram, Shawn, Mark)
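The "can calculate" annotation idea above could be sketched as follows. This is a minimal illustration only; the names (`ActorAnnotation`, `DensityActor`) are hypothetical and are not actual Kepler or Ptolemy II classes:

```java
import java.util.Set;

// Hypothetical sketch: tag an actor with the derivation it can perform,
// e.g. density from species abundance and area. Not actual Kepler code.
class ActorAnnotation {
    final Set<String> inputs;   // input concepts the actor requires
    final String output;        // concept the actor can derive

    ActorAnnotation(Set<String> inputs, String output) {
        this.inputs = inputs;
        this.output = output;
    }

    // True if this actor can produce the wanted concept from what is available.
    boolean canDerive(Set<String> available, String wanted) {
        return wanted.equals(output) && available.containsAll(inputs);
    }
}

public class DensityActor {
    // Given species abundance and area, this actor "can calculate" density.
    static final ActorAnnotation ANNOTATION =
        new ActorAnnotation(Set.of("abundance", "area"), "density");

    // density = abundance / area
    static double compute(double abundance, double area) {
        return abundance / area;
    }
}
```

A library of such annotations would let a planner match available data concepts against the conversion task being modeled.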
+
+ ! Kepler and distributed computation
+ # "XKepler": the ability to disassociate the UI event stream from the execution engine, so that the UI (Vergil) can come and go; the UI subscribes to UI event streams that might be buffered in a real-time system like ORB, and handles messages for UI events and coordination in the way GeoVista does
+ # Distributed computation and grid computing
+ * Two general approaches to be explored
+ ** Kepler interface to an existing distributed job system like GT/NIMROD/CONDOR
+ ** New Kepler director that is "distributed aware" and knows how to schedule and invoke remote jobs through one of several services:
+ *** ptexecute on the remote node
+ *** a web/grid service on the remote node
+ *** a managed job on the remote node (cf. GT2.x jobs)
+ ** Peer-to-peer style sharing of computation resources (possibly another grant)
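The "distributed aware" director idea above could be sketched roughly as below: the director tries each remote invocation service in turn (ptexecute, a web/grid service, a managed job) and submits the job to the first one that can handle it. All names here (`JobService`, `DistributedDirector`, etc.) are illustrative assumptions, not the Ptolemy II `Director` API:

```java
import java.util.List;

// Hypothetical sketch of a "distributed aware" director that picks one of
// several remote invocation mechanisms per job. Not actual Kepler/Ptolemy code.
interface JobService {
    boolean canRun(String job);   // can this service handle the job?
    String submit(String job);    // submit and return a handle for the remote job
}

class PtexecuteService implements JobService {
    public boolean canRun(String job) { return job.endsWith(".xml"); } // MoML models only
    public String submit(String job) { return "ptexecute:" + job; }
}

class GridService implements JobService {
    public boolean canRun(String job) { return true; }                 // generic fallback
    public String submit(String job) { return "gram:" + job; }         // cf. GT2.x managed jobs
}

public class DistributedDirector {
    private final List<JobService> services;

    public DistributedDirector(List<JobService> services) {
        this.services = services;
    }

    // Schedule a job on the first service that can handle it.
    public String schedule(String job) {
        for (JobService s : services) {
            if (s.canRun(job)) return s.submit(job);
        }
        throw new IllegalStateException("no service can run " + job);
    }
}
```

A real director would additionally track remote job state and stage data to and from the remote node; this sketch only shows the service-selection step.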
+