by Ali Hashemi
Spurred by a post on the Ontolog Forum by Peter Yim, which pointed to this talk by Martin Hepp at EKAW 2012, I thought I'd share some observations about Martin's very interesting and engaging presentation. The ensuing discussion on the Ontolog Forum is also quite interesting and well worth exploring.
Video: "From Ontologies to Web Ontologies: Lessons Learned from Conceptual Modeling for the WWW" by Martin Hepp, on Vimeo.
I'd like to begin by noting that the conclusion about web ontologies seems to be scoped not merely to ontologies that are used over the web, but to ontologies that are used over the web and adopted at mass scale, and, most importantly, to ontologies that by virtue of their adoption require adherence and implementation on the part of the user.
Firstly, I would suggest that the assumed definition of "web ontology" does not cover all (or perhaps even most) use cases for ontologies over the web. As a snapshot of the current state of affairs it may be accurate, but I think it is misleading to conclude that this will always be the case, or that it is in fact the most useful stance for the short-term development of ontologies on the web.
But let us grant that this assessment holds. Martin's inference (that we should focus on lightweight ontologies) then rests on a socio-cultural presupposition: that most of what formal ontologists do is just too complicated for lay people to implement, let alone master.
On this point, by the way, I think we are all mostly in agreement. I would simply make one other observation: a few hundred years ago, much of what we now take for granted as basic calculus was limited to perhaps a few highly educated specialists, and a few hundred years before that, most of mathematics was limited to a small segment of the population. I would posit that the current state of affairs has more to do with our curricula and pedagogical/andragogical practices than with anything innately difficult in the subject matter, though of course abstraction has been shown to be one of the most difficult things for children to master.
The other point I'd like to highlight is whether the use case that serves as the basis of his talk characterizes most web ontologies. In the context of the semantic web, perhaps yes; but, at least in my mind, I see a future where ontologies are generally offered as services. This view actually complements one of Martin's conclusions (that the typing of entities is important), but it diverges from his other conclusion (that web ontologies should primarily be lightweight).
Usually, someone choreographing a service or building a system by stitching together multiple libraries would consult the appropriate documentation (e.g. Javadoc) to see what the expected behavior of a method or service is. In the case of ontologies, we don't yet have a well-developed or well-understood documentation environment or paradigm. I would further suggest that the very lightness of a lightweight ontology (i.e. pushing most of the semantics out of the actual representation and into the conventional interpretation of the terms in question) actually makes this problem more difficult.
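To make this concrete, here is a minimal sketch in Python (using the rdflib library; the ex: vocabulary and its terms are invented purely for illustration) contrasting a term that carries almost no formal semantics with one that carries an explicit axiom. In the lightweight case, everything a consumer needs to know lives in a prose comment rather than in the representation itself.

```python
# Minimal sketch, assuming rdflib is installed; the ex: terms are invented.
from rdflib import Graph
from rdflib.namespace import OWL, RDFS

data = """
@prefix ex:   <http://example.org/vocab#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# "Lightweight" term: only a label and a prose comment.
ex:Offer a owl:Class ;
    rdfs:label "Offer" ;
    rdfs:comment "An offer to sell something; pricing conventions are left to the reader." .

# Heavier term: an explicit axiom (every PricedOffer has some price specification).
ex:PricedOffer a owl:Class ;
    rdfs:subClassOf ex:Offer ,
        [ a owl:Restriction ;
          owl:onProperty ex:hasPrice ;
          owl:someValuesFrom ex:PriceSpecification ] .
"""

g = Graph()
g.parse(data=data, format="turtle")

# For the lightweight term, the only guidance is the human-readable comment.
for comment in g.objects(None, RDFS.comment):
    print("Documentation lives here:", comment)

# For the heavier term, the constraint is part of the representation itself.
print("Restriction axioms found:", len(list(g.subjects(OWL.someValuesFrom, None))))
```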
At this point, though, it's important to delineate two things. First, there is of course an important role for lightweight ontologies that are intended to accommodate a wide range of interpretations. On the other hand, more sophisticated implementations (imagine consolidating the results of multiple sensor nets, or databases in the medical domain) may require finer-grained assumptions (i.e. axioms) about the types in question.
Communicating the set of intended models exclusively via a set of axioms can be quite cumbersome, especially given the general population's current understanding of and facility with formal logic. I would also posit that example implementations alone do not suffice. However, many people in a given domain are quite familiar with models of the entities in their domain, and they would often be able to discern amongst models that satisfy axiomatizations of the same type.
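As a toy illustration of what discerning amongst models might look like, here is a purely hypothetical Python sketch: two candidate models of the same (invented) vocabulary are written down as sets of ground facts, and a single axiom ("every offer has a price") is checked against both. The axiom admits both candidates, yet a practitioner looking at them would immediately recognize only one as intended; that gap is exactly what example models can communicate.

```python
# Toy sketch with an invented vocabulary: models as sets of ground facts.

# Candidate 1: the intended reading -- an offer whose price is a separate thing.
model_intended = {
    ("Offer", "o1"),
    ("Price", "p1"),
    ("hasPrice", "o1", "p1"),
}

# Candidate 2: also satisfies the axiom below, but the offer's "price"
# is the offer itself -- something no domain expert would accept.
model_unintended = {
    ("Offer", "o1"),
    ("hasPrice", "o1", "o1"),
}

def every_offer_has_a_price(model):
    """Axiom: for every x with Offer(x) there is some y with hasPrice(x, y)."""
    offers = {args[0] for (pred, *args) in model if pred == "Offer"}
    priced = {args[0] for (pred, *args) in model if pred == "hasPrice"}
    return offers <= priced

# Both candidates satisfy the axiom; the axiom alone cannot tell them apart.
print(every_offer_has_a_price(model_intended))    # True
print(every_offer_has_a_price(model_unintended))  # True
```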
In my view, this area represents one of the least researched though most useful components of the study of ontology, especially as it relates to mass adoption. First, I'd like to posit that the field of ontology would benefit tremendously from including, alongside any set of axioms, sets of intended example implementations. Indeed, schema.org does this to some degree, and Martin's own GoodRelations also provides example implementations.
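As a sketch of the kind of thing I mean (my own illustrative snippet, not one taken from the schema.org or GoodRelations documentation), an example implementation published alongside a vocabulary might look like the following; it is concrete, it can be parsed and inspected mechanically, and it also showcases the importance of typing entities.

```python
# Hypothetical example instance using schema.org terms, parsed with rdflib.
from rdflib import Graph, Namespace, RDF

SCHEMA = Namespace("http://schema.org/")

example = """
@prefix schema: <http://schema.org/> .
@prefix ex:     <http://example.org/shop#> .

ex:offer1 a schema:Offer ;
    schema:itemOffered ex:widget ;
    schema:price "9.99" ;
    schema:priceCurrency "EUR" .

ex:widget a schema:Product ;
    schema:name "Left-handed widget" .
"""

g = Graph()
g.parse(data=example, format="turtle")

# Listing the typed entities: concrete, inspectable documentation that can
# sit alongside the vocabulary's axioms and prose definitions.
for entity, entity_type in g.subject_objects(RDF.type):
    print(entity, "is a", entity_type)

print("Offers in the example:", list(g.subjects(RDF.type, SCHEMA.Offer)))
```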
But I'd like to also point out that this approach (of providing concrete, satisfying models, i.e. examples) can be extended even further. On this list, we often discuss the difference between Tarskian ("mathematical, structural") models and the lay person's sense of the word "model". I humbly believe that there is an entire sub-discipline of ontology here that has barely been researched, and that it may just provide the added push required to reach mass adoption of ontologies.
Namely, I think it is possible to begin bridging the divide
between the more informal models that are used widely, and the Tarskian models.
The former models are often (though not exclusively) pictorial representations
of the universe of discourse, with conventional interpretations of the represented
abstractions.
In many fields, it is possible to construct a mapping from such model representations to a Tarskian model, which in turn becomes a set of literal statements about the types in the domain in question (see my Master's thesis for an outline of this approach).
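Roughly (and this is only a sketch of the idea in Python, not the actual procedure from the thesis; the labels are invented), the mapping might look like the following: a box-and-arrow style lay model is written down as labelled nodes and edges, mechanically turned into a Tarskian-style structure (a domain plus an interpretation of unary and binary predicates), and then serialized as ground literal statements.

```python
# A rough sketch of the idea only; the labels below are invented.

# The informal model: boxes labelled with a type, arrows labelled with a relation.
boxes = {"o1": "Offer", "p1": "PriceSpecification", "w1": "Product"}
arrows = [("o1", "hasPriceSpecification", "p1"),
          ("o1", "includes", "w1")]

# The Tarskian-style structure: a domain plus an interpretation function
# mapping each predicate symbol to its extension.
domain = set(boxes)
interpretation = {}
for individual, type_label in boxes.items():
    interpretation.setdefault(type_label, set()).add(individual)
for source, relation, target in arrows:
    interpretation.setdefault(relation, set()).add((source, target))

print("Domain:", domain)

# The same structure written out as ground literal statements about the domain.
for predicate, extension in sorted(interpretation.items()):
    for element in sorted(extension):
        args = element if isinstance(element, tuple) else (element,)
        print(f"{predicate}({', '.join(args)})")
```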
What I suggest would help bridge this gap between complex axiomatizations and things the lay person can actually understand and implement are sets of models, using the conventions of the domain in question, to illustrate the axiomatization of that domain. In the same way that most people don't first learn of triangles through a formal definition (i.e. a three-sided polygon whose interior angles sum to 180 degrees, or whatnot), but rather through a set of models, the same holds for what may be considered "complex" or heavyweight ontologies.
To summarize, I don't disagree with Martin's talk, though I would take issue with reducing the space of web ontologies to the use case he elaborates. Further, I think one bottleneck to mass adoption in the field of computational ontology is not that it is too arcane or complex (which, imo, a much-needed curricular overhaul would go a long way toward addressing), but that it suffers from deficient documentation. And on this point, I would posit that there is an entire branch of the study of ontology which has thus far been largely neglected (and is sketched out a bit in [1]), and which would go a long way toward making ontologies accessible to a wider audience, given their current "semantic awareness/maturity".
People have facility with models, and it is usually possible
to create robust, repeatable mappings between “lay” models and Tarskian models,
which can provide a useful intuitive description of a particular set of axioms
(or ontology).