ConferenceCall 2013 01 24

= OntologySummit2013: Virtual Panel Session-02 - Thu 2013-01-24 =

Summit Theme: "Ontology Evaluation Across the Ontology Lifecycle"

Summit Track Title: Track-B: Extrinsic Aspects of Ontology Evaluation

Session Topic: Extrinsic Aspects of Ontology Evaluation: Finding the Scope


 * Session Co-chairs: Dr. ToddSchneider (Raytheon) and Mr. TerryLongstreth (Independent Consultant) - intro slides

Panelists / Briefings:


 * Dr. ToddSchneider (Raytheon) & Mr. TerryLongstreth (Independent Consultant) - "Evaluation Dimensions, A Few"  slides
 * Mr. HansPolzer (Lockheed Martin Fellow (ret.)) - "Dimensionality of Evaluation Context for Ontologies"  slides
 * Ms. MaryBalboni et al. (Raytheon) - "Black Box Testing Paradigm in the Lifecycle"  slides
 * Ms. MeganKatsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies"  slides

Archives:


 * Abstract
 * Agenda
 * Prepared presentation material (slides) can be accessed by clicking on each of the title links below:
 * [ 0-Chair ] . [ 1-Schneider ] . [ 2-Polzer ] . [ 3-Balboni ] . [ 4-Katsumi ]
 * Audio recording of the session ... [ 1:53:11 ; mp3 ; 12.96 MB ]
 * it is best to listen to the session with the respective presentations (linked above) open in front of you; you'll be prompted to advance slides by the speaker.
 * transcript of the online chat during the session
 * Additional Resources

Abstract
OntologySummit2013 Session-02: "Extrinsic Aspects of Ontology Evaluation: Finding the Scope" - intro slides

This is our 8th Ontology Summit, a joint initiative by NIST, Ontolog, NCOR, NCBO, IAOA & NCO_NITRD with the support of our co-sponsors. The theme adopted for this Ontology Summit is: "Ontology Evaluation Across the Ontology Lifecycle."

Currently, there is no agreed methodology for the development of ontologies, and there are no universally agreed metrics for ontology evaluation. At the same time, everybody agrees that there are many badly engineered ontologies out there; thus people use -- at least implicitly -- some criteria for the evaluation of ontologies.

During this OntologySummit, we seek to identify best practices for ontology development and evaluation. We will consider the entire lifecycle of an ontology -- from requirements gathering and analysis, through to design and implementation. In this endeavor, the Summit will seek collaboration with the software engineering and knowledge acquisition communities. Research in these fields has led to several mature models for the software lifecycle and the design of knowledge-based systems, and we expect that fruitful interaction among all participants will lead to a consensus for a methodology within ontological engineering. Following earlier Ontology Summit practice, the synthesized results of this season's discourse will be published as a Communiqué.

At the Launch Event on 17 Jan 2013, the organizing team provided an overview of the program and how we will be framing the discourse around the theme of this OntologySummit. Today's session is one of the events planned.

The area of ontology evaluation is still new, and its boundaries and dimensions have yet to be defined. We propose to ask the community (panelists and participants alike) to provide input during this session on the dimensions of ontology evaluation and the methodologies that can be applied.

More details about this OntologySummit are available at: OntologySummit2013 (homepage for this summit)

Briefings:

 * Dr. ToddSchneider (Raytheon) & Mr. TerryLongstreth (Independent Consultant) - "A Few Evaluation Dimensions"  slides
 * Abstract: ... The area of ontology evaluation is still new and its boundaries and dimensions have yet to be defined. We propose to ask the community to provide input for the dimensions of ontology evaluation.


 * Mr. HansPolzer (Lockheed Martin (ret.)) - "Dimensionality of Evaluation Context for Ontologies"  slides
 * Abstract: ... Evaluation of anything, including ontologies, is done for some purpose within some context. Often much of that purpose and context is left implicit because it is assumed to be shared among the participants in the evaluation process. However, as the number and scope of the things being evaluated grows, and as the contexts in which they are evaluated become more diverse, implicit purpose and context dimensions become problematic. The appropriateness of any given set of evaluation attributes and their valuation depends significantly on evaluation purpose and context. This presentation draws on some past experiences with evaluation context issues in related domains to motivate attention to more explicit representation of evaluation context in ontology evaluation. It also suggests some important evaluation context dimensions for consideration by the ontology community as a starting point for further exploration and refinement by the community.


 * Ms. MaryBalboni et al. (Raytheon) - "Black Box Testing Paradigm in the Lifecycle"  slides
 * Abstract: ... One may approach ontology evaluation as testing in a Black Box paradigm (i.e., the ontology exists within said Black Box). What are some basic Black Box testing methods and applications in the Lifecycle? Can testing a large database be akin to testing an ontology? What are some interesting data points regarding large database Black Box testing, especially if they can relate to ontology testing? Are Security concerns already covered by Black Box testing? This paper broaches these subjects from an engineering point of view, to provoke thoughts and ideas on Black Box testing of a system that may include an ontology.


 * Ms. MeganKatsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies"  slides
 * Abstract: ... The design and evaluation of first-order logic ontologies pose multiple challenges. If we consider the ontology lifecycle, two issues of critical importance are the specification of the intended models for the ontology's concepts (requirements) and the relationship between these models and the models of the ontology's axioms (verification). This talk presents a methodology in which automated reasoning plays a critical role for the development and verification of first-order logic ontologies.  Its focus is on the verification of requirements (intended models), and how the results of this evaluation can be used to both revise the requirements and correct errors in the ontology.  The methodology will be illustrated using examples from the Boxworld ontology (available in COLORE).  While it is focused on the challenges of the development of first-order logic ontologies, this methodology may also be useful for ontology development in other logical languages.

Agenda
OntologySummit2013 - Panel Session-02


 * Session Format: this is a virtual session conducted over an augmented conference call


 * 1. Opening (co-chair) - ToddSchneider / TerryLongstreth [10 min.] ... [ slides ]
 * 2. Panel briefings - ToddSchneider, HansPolzer, MaryBalboni, MeganKatsumi - [~20 min. each]
 * 3. Q & A and open discussion [All: ~30 min.] -- please refer to process above
 * 4. Wrap-up / Announcements (co-chair) - TerryLongstreth / ToddSchneider [5 min.]

Proceedings:
Please refer to the above

IM Chat Transcript captured during the session:

see raw transcript here.

(for better clarity, the version below is a re-organized and lightly edited chat-transcript.) Participants are welcome to make light edits to their own contributions as they see fit.

-- begin in-session chat-transcript --

[09:03] PeterYim: Welcome to the

= OntologySummit2013: Virtual Panel Session-02 - Thu 2013-01-24 =

Summit Theme: Ontology Evaluation Across the Ontology Lifecycle


 * Summit Track Title: Track-B: Extrinsic Aspects of Ontology Evaluation

Session Topic: Extrinsic Aspects of Ontology Evaluation: Finding the Scope


 * Session Co-chairs: Dr. ToddSchneider (Raytheon) and Mr. TerryLongstreth (Independent Consultant)

Panelists / Briefings:


 * Dr. ToddSchneider (Raytheon) & Mr. TerryLongstreth (Independent Consultant) - "Evaluation Dimensions, A Few"
 * Mr. HansPolzer (Lockheed Martin Fellow (ret.)) - "Dimensionality of Evaluation Context for Ontologies"
 * Ms. MaryBalboni et al. (Raytheon) - "Black Box Testing Paradigm in the Lifecycle"
 * Ms. MeganKatsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies"

Logistics:


 * Refer to details on session page at: http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_01_24


 * (if you haven't already done so) please click on "settings" (top center) and morph from "anonymous" to your RealName (in WikiWord format)


 * Mute control: *7 to un-mute ... *6 to mute

 * Can't find Skype Dial pad?
 * for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"
 * for Linux Skype users: please note that the dial-pad is only available on v4.1 (or later) and on the earlier Skype versions 2.x; if the dialpad button is not shown in the call window, press the "d" hotkey to enable it.

Attendees: ToddSchneider (co-chair), TerryLongstreth (co-chair), AlanRector, AnatolyLevenchuk, AngelaLocoro, BobSchloss, BobbinTeegarden, CarmenChui, DaliaVaranka, DonghuanTang, FabianNeuhaus, FranLightsom, FrankOlken, GaryBergCross, HansPolzer, JackRing, JoelBender, JohnBilmanis, KenBaclawski, LalehJalali, LeoObrst, MariCarmenSuarezFigueroa, MaryBalboni, MatthewWest, MaxPetrenko, MeganKatsumi, MichaelGruninger, MikeDean, MikeRiben, OliverKutz, PavithraKenjige, QaisAlKhazraji, PeterYim, RamSriram, RichardMartin, RosarioUcedaSosa, SteveRay, TillMossakowski, TorstenHahmann, TrishWhetzel

Proceedings:
. [08:57] anonymous morphed into Donghuan

[09:13] Donghuan morphed into PennState :Qais

[09:14] PennState :Qais morphed into PennState

[09:17] anonymous1 morphed into MaxPetrenko

[09:21] anonymous2 morphed into MaryBalboni

[09:23] anonymous1 morphed into CarmenChui

[09:23] anonymous1 morphed into FabianNeuhaus

[09:24] PennState morphed into Donghuan

[09:24] Donghuan morphed into Qais

[09:24] Qais morphed into PennState

[09:26] PennState morphed into Qais_Donghuan

[09:24] anonymous morphed into Angela Locoro

[09:25] Angela Locoro morphed into AngelaLocoro

[09:26] anonymous morphed into JohnBilmanis

[09:26] anonymous morphed into SteveRay

[09:27] MichaelGruninger morphed into MeganKatsumi

[09:29] MatthewWest: Just a note, but the Session page shows the conference starting at 1630 UTC when it is actually 1730 UTC.

[09:55] PeterYim: @MatthewWest - thank you for the prompt ... sorry, everyone, the session start-time should be: 9:30am PST / 12:30pm EST / 6:30pm CET / 17:30 GMT/UTC

[09:30] anonymous morphed into RosarioUcedaSosa

[09:31] anonymous morphed into RamSriram

[09:33] anonymous1 morphed into TorstenHahmann

[09:55] anonymous morphed into laleh

[09:59] PeterYim: @laleh - would you kindly provide your real name (in WikiWord format, if you please) and morph into it with "Settings" (button at top center of window)

[10:01] laleh morphed into LalehJalali

[10:03] PeterYim: @LalehJalali - thank you, welcome to the session ... are you one of RameshJain's students at UCI?

[10:08] LalehJalali: Yes

[09:34] PeterYim: == [0-Chair] ToddSchneider & TerryLongstreth (co-chairs) opening the session ...

[09:37] anonymous morphed into FrankOlken

[09:39] PeterYim: == [2-Polzer] HansPolzer presenting ...

[09:40] List of members: AlanRector, AnatolyLevenchuk, AngelaLocoro, BobbinTeegarden, BobSchloss, CarmenChui, DaliaVaranka, FabianNeuhaus, FrankOlken, FranLightsom, HansPolzer, JoelBender, JohnBilmanis, KenBaclawski, LeoObrst, MariCarmenSuarezFigueroa, MaryBalboni, MatthewWest, MaxPetrenko, MeganKatsumi, MichaelGruninger, MikeDean, MikeRiben, OliverKutz, PeterYim, Qais_Donghuan, RamSriram, RichardMartin, RosarioUcedaSosa, SteveRay, TerryLongstreth, ToddSchneider, TorstenHahmann, vnc2

[09:42] anonymous morphed into TrishWhetzel

[09:44] MikeRiben: are we on slide 5?

[09:47] JackRing: Pls stop using "Next Slide" and say number of slide

[09:47] anonymous morphed into GaryBergCross

[09:52] ToddSchneider: Jack, Hans is on slide 7.

[09:45] JackRing: Is your Evaluation Context different from. Ontology Context?

[09:56] ToddSchneider: Qais, if you have a question would you type it in the chat box?

[09:56] PeterYim: @Qais_Donghuan - we will hold questions off till after the presentations are done, please post your questions on the chat-space (as a placeholder/reminder) for now

[09:55] TerryLongstreth: On slide 8, Hans mentions reasoners as an aspect of the ontology, but as Uschold has pointed out, the reasoner may be used as a test/evaluation tool

[09:57] ToddSchneider: Terry, the evaluation(s) may need to be redone if the reasoner is changed.

[10:03] TerryLongstreth: Sure. I was just pointing out that the reasoner may be a tool for extrinsic evaluation.

[10:04] ToddSchneider: Terry, yes a tool used in evaluation and the subject of evaluation itself (e.g., performance).

[10:08] SteveRay: @Hans: It would help if you could provide some concrete examples that would bring your observations into focus.

[10:10] MichaelGruninger: @Hans: In what sense is ontology compatibility considered to be a rating?

[10:09] PeterYim: == [1-Schneider] ToddSchneider presenting, and soliciting input on Ontology Evaluation dimensions ...

[10:01] JackRing: (ref. ToddSchneider's solicitation for input on dimensions) Reusefulness of an ontology or subset(s) thereof?

[10:08] JackRing: This is a good start toward an ontology of ontology evaluation but we have a loooong way to go.

[10:10] anonymous morphed into PavithraKenjige

[10:15] JackRing: In systems thinking, the three basic dimensions are Quality, Parsimony, Beauty

[10:15] ToddSchneider: The URL for adding to the list of possible evaluation dimensions is http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013_Extrinsic_Aspects_Of_Ontology_Evaluation_CommunityInput

[10:15] MariCarmenSuarezFigueroa: In the legal part, maybe we should consider also license (and not only copyright)

[10:15] TerryLongstreth: Thanks Mari Carmen

[10:16] FabianNeuhaus: @Todd, we need more than a list. We need definitions of the terms on your evaluation dimensions list, because they are not self-explanatory.

[10:16] MatthewWest: Relevance, Clarity, Consistency, Accessibility, Timeliness, Completeness, Accuracy, Costs (development, maintenance), Benefits

[10:17] MatthewWest: Provenance

[10:17] ToddSchneider: Fabian, yes we will need definitions, context, and possibly intent. But first I'd like to conduct a simple gathering exercise.

[10:18] MatthewWest: Modularity

[10:17] FabianNeuhaus: @Todd: it seems that your "evaluation dimensions" are very different from Hans' dimensions.

[10:20] ToddSchneider: Fabian, yes. Hans was talking about context. I'm thinking of things more directly related to evaluation criteria. Both Hans and I like metaphors from physics.

[10:48] LeoObrst: @Todd: your second set of slides, re: slide 4: Precision, Recall, Coverage, Correctness and perhaps others will also be important for Track A Intrinsic Aspects of Ontology Evaluation. Perhaps your metrics will be: Precision With_Respect_To(domain D, requirement R), etc.? Just a thought.

[10:21] PeterYim: == [3-Balboni] MaryBalboni presenting ...

[10:21] TerryLongstreth: Mary's term: CSCI - Computer Software Configuration Item - smallest unit of testing at some level (varies by customer: sometimes a module, sometimes a capability ...)

[10:23] TerryLongstreth: Current speaker - MaryBalboni - slides 3-Balboni

[10:27] BobbinTeegarden: @Mary, slide 4 testing continuum -- may need to go one more step: 'critical testing' is in actual usage (a step beyond beta), and that feedback loop creates continual improvement. Might want to extend the thinking to 'usage as a test' and ongoing criteria in field usage?

[10:29] TerryLongstreth: @Bobbin - good point and note that in many cases, evaluation may not start until (years?) after the ontology has been put into continuous usage

[10:29] TillMossakowski: how does it work that injection of bugs leads to finding more (real) bugs? Just because there is more overall debugging effort?

[10:30] FabianNeuhaus: @Till: I think it allows you to evaluate the coverage of your tests.

[10:33] JackRing: It seems that your testing is focused on finding bugs as contrasted to discovering dynamic and integrity limits. Instead of 'supports system conditions' it should be 'discovers how ontology limits system envelope'

[10:35] JackRing: Once we understand how to examine a model for progress properties and integrity properties we no longer need to run a bunch of tests to determine ontology efficacy.

[10:29] SteveRay: @Mary: Some of your testing examples look more like what we would call intrinsic evaluation. Specifically I'm thinking of your example of finding injected bugs.

[10:59] MaryBalboni: @SteveRay: Injected bugs - yes it is intrinsic to those that inject the defects, but would be extrinsic to the testers that are discovering defects ...

[11:01] SteveRay: @Mary: I would agree with you provided that the testers are testing via blackbox methods such as performance given certain inputs, and not by examining the code for logical or structural bugs. Are we on the same page?

[11:03] MaryBalboni: @SteveRay - absolutely!

[10:49] BobbinTeegarden: @JackRing Would 'effectiveness' fall under beauty? What criteria?

[10:58] JackRing: @Bobbin, Effect-iveness is a Quality factor. Beauty is in the eye of the beer-holder.

[10:37] TerryLongstreth: Example of business rule: ask bank for email when account drops below $200. Evaluate by cashing checks until balance below threshold.

[10:36] ToddSchneider: Leo, have you cloned yourself?

[10:37] LeoObrst: No, I had to reboot firebox and it had some fun.

[10:41] JackRing: No one has mentioned the dimension of complexness. Because ontologies quickly become complex topologies, response time becomes very important if implemented on a von Neumann architecture. Therefore the structure of the ontology for efficiency of response becomes an important dimension.

[10:42] BobbinTeegarden: At DEC, we used an overlay on all engineering for RAMPSS -- Reliability, Availability, Maintainability, Performance, Scalability, and Security. Maybe these all apply for black box here? Mary has cited some of them...

[10:56] MaryBalboni: @BobbinTeegarden: re ongoing criteria in field usage - yes; during what we call sustainment, after delivery, upgrades are sent out, acceptance tests are repeated, and depending on how much has changed, the testing may only be regression of specific areas in the system.

[10:43] LeoObrst: @MaryBalboni: re: slide 14: back in the day, we would characterize 3 kinds of integrity: 1) domain integrity (think value domains in a column, i.e., char, int, etc.), 2) referential integrity (key relationships: primary/foreign), 3) semantic integrity (now called business rules). Ontologies do have these issues. On the ontology side, they can be handled slightly differently: e.g., referential integrity (really mostly structural integrity) will be handled differently based on Open World Assumption (e.g., in OWL) or Closed World Assumption (e.g., in Prolog), with the latter being enforced in general by integrity constraints.

[10:52] MaryBalboni: @LeoObrst - thanks for feedback - since I am not an expert in Ontology it is very nice to see that these testing paradigms are reusable - and tailorable.

[10:44] PeterYim: == [4-Katsumi] MeganKatsumi presenting ...

[10:53] LeoObrst: @Megan: NicolaGuarino for our upcoming (Mar. 7, 2013) Track A session will talk along the lines of your slides 8, etc.

[10:52] TillMossakowski: Is it always clear what the intended models are? After all, initially you will have only an informal understanding of the domain, which will be refined during the process of formalisation. Only in this process, the class of intended models becomes clearer.

[10:54] MichaelGruninger: @Till: At any point in development, we are working with a specific set of intended models, which is why we call this verification. Validation is addressing the question of whether or not we have the right set of intended models.

[10:56] MichaelGruninger: We formalize the ontology's requirements as the set of intended models (or indirectly as a set of competency questions). It might not always be clear what the intended models are, but this is analogous to the case in software development when we are not clear as to what the requirements are.

[10:56] TillMossakowski: @Michael: OK, that is similar as in software validation and verification. But then validation should be mentioned, too.

[10:56] ToddSchneider: Michael, so there's a presumption that you have extensive explicit knowledge of the intended model(s), correct?

[10:58] MichaelGruninger: @Todd: since intended models are the formalization of the requirements, extensive explicit knowledge of intended models is equivalent to "extensive explicit knowledge about the requirements"

[10:57] LeoObrst: @Till, Michael: one issue is the mapping of the "conceptualization" to the intended models, right? I guess Michael's requirements are in effect statements/notions of the conceptualization. Is that right?

[10:59] MichaelGruninger: @LeoObrst: I suppose there could be the case where someone incorrectly specified the intended models or competency questions that formalize a particular requirement (i.e. the conceptualization is wrong)

[10:59] TillMossakowski: It seems that two axiomatisations (requirements and design) are compared with each other. The requirements describe the intended models. Is this correct?

[11:00] MichaelGruninger: @Till: We would say that the intended models describe the requirements.

[11:01] MichaelGruninger: @Till: The notion of comparing axiomatizations arises primarily when we use the models of some other ontology as a way of formalizing the intended models of the ontology we are evaluating

[11:02] TillMossakowski: @Michael: but you cannot give the set of intended models to a prover, only an axiomatisation of it. Hence it seems that you are testing two different axiomatisations against each other.

[11:00] ToddSchneider: All, due to a changing schedule I need to leave this session early. Cheers.

[11:02] MariCarmenSuarezFigueroa: We could also consider the verification of requirements (competency questions) using e.g. SPARQL queries.

[11:04] PeterYim: @MeganKatsumi - ref. your slide#4 ... would you see some "fine tuning" after the ontology has been committed to "Application" - adjustment to the "Requirements" and "Design" possibly?

[11:06] TerryLongstreth: Fabian suggests that Megan's characterization of semantic correctness is too strong...

[11:09] MichaelGruninger: @Till: Yes, when we use theorem proving, we need to use the axiomatization of another theory. However, there are also cases in which we verify an ontology directly in the metatheory. In terms of COLORE, we need to use this latter approach for the core ontologies.

[11:10] TorstenHahmann: @Till: but you can give individual models to a theorem prover. It is a question how to come up with a good set of models to evaluate the axiomatization.

[11:11] TillMossakowski: OK, but this probably means that you have a set of intended models that is more exemplary than exhaustive.

[11:11] FabianNeuhaus: @Till, Michael. It seems to me that Till has a good point. Especially if the ontology and the set of axioms that express the requirements both have exactly the same models, it seems that you just have two equivalent axiom sets (ontologies)

[11:12] TorstenHahmann: Yes, of course, the same as with software verification.

[11:12] TillMossakowski: indeed, but sometimes it might just be an implication

[11:15] TillMossakowski: further dimensions: consistency; correctness w.r.t. intended models (as in Megan's talk), completeness in the sense of having intended logical consequences

[11:16] MeganKatsumi: @Leo: I'm not sure that I understand your question, can you give an example?

[11:03] LeoObrst: @Megan: what if you have 2 or more requirements, e.g., going from a 2-D to a 3-D or 4-D world?

[11:17] PeterYim: == Q&A and Open Discussion ... soliciting of additional thoughts on Evaluation Dimensions

[11:17] BobbinTeegarden: It seems we have covered correctness, precision, meeting requirements, etc. well, but have we really addressed the 'goodness' of an ontology? And we certainly haven't addressed an 'elegant' ontology, or do we care? Is this akin to Jack's 'beauty' assessment?

[11:17] BobSchloss: Because of the analogy we heard with Database Security Blackbox Assessment, I wonder if there is an analogy to "normalization" (nth normal form) for database schemas. Is some evaluation criteria related to factoring, simplicity, minimalism, straightforwardness.....

[11:19] TorstenHahmann: another requirement that I think hasn't been mentioned yet: granularity (level of detail)

[11:21] LeoObrst: @Torsten: yes, that was my question, i.e., granularity.

[11:22] TorstenHahmann: @Leo: I thought so.

[11:22] MariCarmenSuarezFigueroa: I also think granularity is a very important dimension....

[11:19] BobSchloss: I am also thinking about issues of granularity and regularity ... If a program wants to remove one instance "entity" from a knowledge base, does this ontology make it very simple to just do the remove/delete, or is it so interconnected that removal requires a much more complicated syntax....

[11:24] BobSchloss: Although this is driven by the domain, some indication of an ontology's rate of evolution or degree of stability or expected rate of change may be important to those using organizations. If there are 2 ontologies, and one, by being very simple and universal, doesn't have as many specifics but will be stable for decades; whereas another, because it is very detailed using concepts that are related to current technologies, current business practices, and therefore may need to be updated every year or two... I'd like to know this.

[11:29] MatthewWest: Yes, stability is an important criterion. For me that is about how much the existing ontology needs to change when you need to make an addition.

[11:24] MariCarmenSuarezFigueroa: Sorry I have to go (due to another commitment). Thank you very much for the interesting presentations. Best Regards

[11:28] BobSchloss: Another analogy to the world of blackbox testing... the software engineers have ideas of Orthogonal Defect Classification and more generally, ways of estimating how many remaining bugs there are in some software based on the rates and kinds of discovery of new bugs that have happened over time up until the present moment. I wonder if there is something for an ontology... one that has a constant level of utilization, but which is having a decrease in reporting of errors.... can we guess how many other errors remain in the ontology? Again... this is an analogy.... some way of estimating "quality"...

[11:27] MichaelGruninger: @Fabian: It would be great if we could also focus on criteria and techniques that people are already using in practice with real ontologies and applications.

[11:27] SteveRay: @Michael: +1

[11:28] FabianNeuhaus: @michael +1

[11:29] LeoObrst: Perhaps the main difference between Intrinsic -> Extrinsic is that at least some of the Intrinsic predicates are also Extrinsic predicates with additional arguments, e.g., Domain, Requirement, etc.?

[11:30] LeoObrst: Must go, thanks, all!

[11:31] PeterYim: wonderful session ... really good talks ... thanks everyone!

[11:31] PeterYim: -- session ended: 11:30 am PST --

[11:31] List of attendees: AlanRector, AnatolyLevenchuk, AngelaLocoro, BobSchloss, BobbinTeegarden, CarmenChui, DaliaVaranka, DonghuanTang, FabianNeuhaus, FranLightsom, FrankOlken, GaryBergCross, JackRing, JoelBender, JohnBilmanis, KenBaclawski, LalehJalali, LeoObrst, MariCarmenSuarezFigueroa, MaryBalboni, MatthewWest, MaxPetrenko, MeganKatsumi, MichaelGruninger, MikeDean, MikeRiben, OliverKutz, PavithraKenjige, QaisAlKhazraji, PeterYim, RamSriram, RichardMartin, RosarioUcedaSosa, SteveRay, TerryLongstreth, TillMossakowski, ToddSchneider, TorstenHahmann, TrishWhetzel, vnc2

-- end of in-session chat-transcript --


 * Further Question & Remarks - please post them to the [ ontology-summit ] listserv
 * all subscribers to the previous summit discussion, and all who responded to today's call will automatically be subscribed to the [ ontology-summit ] listserv
 * if you are already subscribed, post to 
 * (if you are not yet subscribed) you may subscribe yourself to the [ ontology-summit ] listserv, by sending a blank email to  from your subscribing email address, and then follow the instructions you receive back from the mailing list system.
 * (in case you aren't already a member) you may also want to join the ONTOLOG community and be subscribed to the [ ontolog-forum ] listserv, when general ontology-related topics (not specific to this year's Summit theme) are discussed. Please refer to Ontolog membership details at: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
 * kindly email  if you have any question.

Additional Resources:

 * Homepage of OntologySummit2013
 * Proceedings from the Ontology Summit 2013 Launch Event (2013.01.17) - ConferenceCall_2013_01_17
 * [ontology-summit] mailing list archives - http://ontolog.cim3.net/forum/ontology-summit/
 * to subscribe to this discussion list: send a blank message from your subscribing email address to  or visit http://ontolog.cim3.net/mailman/listinfo/ontology-summit/ and subscribe yourself there
 * Ontology Summit 2013 Community Library - http://www.zotero.org/groups/ontologysummit2013
 * Homepage of the Summit - see: OntologySummit

For the record ...

How To Join (while the session is in progress)

 * 1. Dial in with a phone or from skype: http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_01_24#nid3L1D
 * 2. Open chat-workspace in a new browser window: http://webconf.soaphub.org/conf/room/summit_20130124
 * 3. Download presentations for each speaker here: http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_01_24#nid3L12
 * or, 3.1 optionally, access our shared-screen vnc server, if you are not behind a corporate firewall

Conference Call Details

 * Date: Thursday, 24-Jan-2013
 * Start Time: 9:30am PST / 12:30pm EST / 6:30pm CET / 17:30 UTC
 * ref: World Clock
 * Expected Call Duration: ~2.0 hours


 * Dial-in:
 * Phone (US): +1 (206) 402-0100 ... (long distance cost may apply)
 * ... [ backup nbr: (415) 671-4335 ]
 * when prompted enter Conference ID: 141184#
 * Skype: joinconference (i.e. make a skype call to the contact with skypeID="joinconference") ... (generally free-of-charge, when connecting from your computer)
 * when prompted enter Conference ID: 141184#
 * Unfamiliar with how to do this on Skype? ...
 * Add the contact "joinconference" to your skype contact list first. To participate in the teleconference, make a skype call to "joinconference", then open the dial pad (see platform-specific instructions below) and enter the Conference ID: 141184# when prompted.
 * Can't find Skype Dial pad? ...
 * for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"
 * for Linux Skype users: please note that the dial-pad is only available on v4.1 (or later) and on the earlier Skype versions 2.x; if the dialpad button is not shown in the call window, press the "d" hotkey to enable it. ... (ref.)


 * Shared-screen support (VNC session), if applicable, will be started 5 minutes before the call at: http://vnc2.cim3.net:5800/
 * view-only password: "ontolog"
 * if you plan to log into this shared-screen option (which the speaker may be navigating) and are not familiar with the process, please call in 5 minutes before the start of the session so that we can work out the connection logistics. Help on this will generally not be available once the presentation starts.
 * people behind corporate firewalls may have difficulty accessing this. If that is the case, please download the slides above (where applicable) and run them locally. The speaker(s) will prompt you to advance the slides during the talk.


 * In-session chat-room url: http://webconf.soaphub.org/conf/room/summit_20130124
 * instructions: once you have access to the page, click on the "settings" button and identify yourself (by changing the Name field from "anonymous" to your real name, e.g. "JaneDoe").
 * You can indicate that you want to ask a question verbally by clicking on the "hand" button, and wait for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.
 * thanks to the soaphub.org folks, one can now use a jabber/xmpp client (e.g. gtalk) to join this chatroom. Just add the room as a buddy - (in our case here) summit_20130124@soaphub.org ... handy for mobile devices!


 * Discussions and Q & A:
 * Nominally, when a presentation is in progress, the moderator will mute everyone, except for the speaker.
 * To un-mute, press "*7" ... To mute, press "*6" (please mute your phone, especially if you are in a noisy environment, or if you are introducing noise, echoes, etc. into the conference line.)
 * we will usually save all questions and discussions until after all presentations are through. You are encouraged to jot down questions in the chat area in the meantime (that way, they get documented; and you might even get some answers in the interim, through the chat.)
 * During the Q&A / discussion segment (when everyone is muted), if you want to speak or have questions or remarks to make, please raise your hand (virtually) by clicking on the "hand button" (lower right) on the chat session page. You may speak when acknowledged by the session moderator (again, press "*7" on your phone to un-mute). Please test your voice and introduce yourself before proceeding with your remarks. (Remember to click on the "hand button" again to lower your hand, and press "*6" on your phone to mute yourself, after you are done speaking.)


 * Please review our Virtual Session Tips and Ground Rules - see: VirtualSpeakerSessionTips


 * An RSVP to [mailto:peter.yim@cim3.com peter.yim@cim3.com] with your affiliation is appreciated ... or simply add yourself to the "Expected Attendee" list below (if you are already a member of the community.)


 * This session, like all other Ontolog events, is open to the public. Information relating to this session is shared on this wiki page: http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_01_24


 * Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.

Attendees

 * Attended:
 * AlanRector
 * AnatolyLevenchuk
 * AngelaLocoro
 * BobbinTeegarden
 * BobSchloss
 * CarmenChui
 * DaliaVaranka
 * DavidHay
 * DougToppin
 * FabianNeuhaus
 * FrankOlken
 * FranLightsom
 * GaryBergCross
 * HansPolzer
 * JackRing
 * JoelBender
 * JohnBilmanis
 * KenBaclawski
 * LalehJalali
 * LeoObrst
 * MariaCopeland
 * MariCarmenSuarezFigueroa
 * MaryBalboni
 * MatthewWest
 * MaxPetrenko
 * MeganKatsumi
 * MichaelGruninger
 * MikeDean
 * MikeRiben
 * NathalieAussenacGilles
 * OliverKutz
 * PavithraKenjige
 * PeterYim
 * DonghuanTang
 * QaisAlKhazraji
 * RamSriram
 * RichardMartin
 * RosarioUcedaSosa
 * SteveRay
 * ThanhVanTran
 * TerryLongstreth (co-chair)
 * TillMossakowski
 * ToddSchneider (co-chair)
 * TorstenHahmann
 * TrishWhetzel
 * VictorAgroskin


 * Expecting: (registered attendees)
 * ToddSchneider (co-chair)
 * TerryLongstreth (co-chair)
 * HansPolzer
 * MaryBalboni
 * MeganKatsumi
 * MichaelGruninger
 * MatthewWest (joining late)
 * RamSriram
 * MikeDean
 * AmandaVizedom
 * FabianNeuhaus
 * LeoObrst
 * SteveRay
 * MikeBennett
 * JoanneLuciano
 * PeterYim
 * PavithraKenjige
 * DaliaVaranka
 * DavidHay
 * CarmenChui
 * DonghuanTang
 * QaisAlKhazraji
 * MariCarmenSuarezFigueroa
 * GaryBergCross
 * TorstenHahmann
 * FranLightsom
 * BobbinTeegarden
 * FrankOlken
 * please add yourself to the list if you are a member of the Ontolog or OntologySummit community, or rsvp to peter.yim@cim3.com with your affiliation.


 * Regrets:
 * MarcelaVegetti
 * HassanAitKaci
 * AmandaVizedom (very much wish I could make this one especially; will eagerly review slides & transcript after)
 * ElieAbiLahoud