SemApps - Creating Semantic Applications

Hi. I'm Mark Wallace. In my role as an Ontologist and Software Architect, I am continually working with new and fun semantic technologies. Be it RDF/OWL, Triple-stores, Semantic Wikis, or Text Extraction, I am learning more all the time and want to share my experiences in hopes of helping others along with these technologies. I hope to post a new article every month or two, so check back in every so often to see what's cooking!

Wednesday, April 17, 2024

A working triple store in about 5 minutes

There are many great triple stores out there. I am not endorsing any particular one. However, if you want a quick way to start playing around with triples and SPARQL, I've found RDF4J to be one of the fastest ways to get started with the RDF/SPARQL stack.

This example is on Windows 10, but setting up on Unix should be similar.

This also assumes that you have Java installed. (This example was tested with Java 11.)

Set Up a Tomcat Web Server

Google for Tomcat download. Download Tomcat and unpack it to a folder, e.g. C:\Programs. (Downloads for me took about 1 minute total.)

Deploy the RDF4J Webapps in Tomcat

Google for RDF4J download. Download the ZIP file. Open the ZIP and find the war folder.
Copy rdf4j-server.war and rdf4j-workbench.war into Tomcat's webapps folder.

Run Tomcat

Start Tomcat using its bin folder's startup.bat script. (Wait until the console shows Server startup in ... ms.)

Start using your triple store!

In a browser, go to http://localhost:8080/rdf4j-workbench

Congratulations, you have a working triple store! (If you don't, reply/comment and tell me what you're seeing.)

Create a new repository

RDF4J lets you have multiple "repositories", which are basically independent triple stores within the RDF4J server. Let's set one up.

On the workbench user interface, click New Repository. Give the repository an ID like test and a title like a test repo, then click Next and Create.

Put in some data

On the workbench user interface, click SPARQL Update and paste this in:

PREFIX : <http://example/>
INSERT DATA {
  :Mark a :Person .
  :Judi a :Person .
  :Linda a :Person .
  :Mark :knows :Judi .
  :Judi :knows :Linda .
  :Mark :knows :Linda .
}

Click Execute.

You can also load data by uploading files.

Explore the data

  • Click Types to see that it figured out that <http://example/Person> is a type.
  • Click on <http://example/Person> to see the instances of this type.
  • Click on <http://example/Judi> to see all the triples with <http://example/Judi> as the subject or object.

Add namespaces to make things less verbose

  • Click Namespaces.
  • Enter test in the Prefix box, and http://example/ in the Namespace box.
  • Click Update.

Now explore data again and see that it uses the test: namespace prefix rather than the full URI when showing data, e.g. test:Mark and test:knows.

SPARQL Query your data

Click Query and enter this:

select * where {?x a ?y} limit 10

Click Execute.

Beyond 5 minutes...

Want to query as a SPARQL Endpoint?

Use a URL of the form below, where test is the repository ID you specified. The part after query= must be URL-encoded, e.g.,

curl  http://localhost:8080/rdf4j-server/repositories/test?query=select%20%2A%20%7B%3Fx%20a%20%3Fy%7D
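
If you'd rather not URL-encode queries by hand, here is a minimal Python sketch using the requests library (my choice here--any HTTP client works), which encodes the query parameter for you:

import requests

endpoint = "http://localhost:8080/rdf4j-server/repositories/test"
query = "select * {?x a ?y}"

# requests URL-encodes the query parameter for us.
response = requests.get(
    endpoint,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
response.raise_for_status()
for row in response.json()["results"]["bindings"]:
    print(row["x"]["value"], row["y"]["value"])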

Want to load your triple store over a REST API?

Use a URL of the form below to push statements.

curl --data @mydata.ttl -H "Content-Type: application/x-turtle" http://localhost:8080/rdf4j-server/repositories/test/statements 

You can store to a named graph by adding a ?context parameter to the URL, e.g.,

curl -v --data @mydata.ttl -H "Content-Type: application/x-turtle" \
http://localhost:8080/rdf4j-server/repositories/test/statements?context=%3Chttp://graph1%3E
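
The same load can be scripted. Here is a minimal Python sketch (mydata.ttl and <http://graph1> are just the example names from above):

import requests

url = "http://localhost:8080/rdf4j-server/repositories/test/statements"
with open("mydata.ttl", "rb") as f:
    response = requests.post(
        url,
        data=f,
        headers={"Content-Type": "application/x-turtle"},
        params={"context": "<http://graph1>"},  # omit to load the default graph
    )
response.raise_for_status()  # RDF4J returns 204 No Content on success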

More realistic dataset

Try with more triples, and multiple named graphs.

Approach: 

  • Use the LUBM data set (e.g., 1 university)
  • Put the ontology into one named graph, and the data for the university into another
  • Query against the joined graphs (an example query appears below)

RDF4J: 

  • Create repo named lubm1 
  • For ontology:
    • Click Add
    • Put in RDF Data URL of  http://swat.cse.lehigh.edu/onto/univ-bench.owl 
    • Select Data format of RDF/XML
    • Click Upload
  • For data:
    • Make a ZIP of all the University*.owl files from the LUBM 1-university data set
    • Click Add and choose the ZIP file you made
    • Set the Base URI to http://www.University0.edu (which will make the graph name <http://www.University0.edu>)
    • Select Data format of RDF/XML
    • Click Upload

Now you will have 2 graphs: one with the TBox and one with the University0 ABox data.

Inferred triples land in the default graph. 
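
To query the joined graphs, you can name both graphs in FROM clauses. Here is a sketch: the University0 graph name comes from the Base URI above, but the ontology graph name is an assumption on my part (check the workbench's Contexts page to see the actual graph names in your repository):

PREFIX ub: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#>
SELECT ?dept (COUNT(?student) AS ?numStudents)
FROM <http://swat.cse.lehigh.edu/onto/univ-bench.owl>
FROM <http://www.University0.edu>
WHERE {
  ?student a ub:GraduateStudent ;
           ub:memberOf ?dept .
}
GROUP BY ?dept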

Friday, February 22, 2019

Simplest Python/Flask + React-JS App

Motivation

I wanted to create the simplest Python/React server/client app I could.

There are many quick ways to create a React app, including create-react-app, documented here. This is probably best for production, but it does create a lot of bloat. E.g., a sample hello world app created a folder tree containing 29,701 files and 4,541 folders! Additionally, it is all JavaScript, i.e., no Python; it uses Node.js on the back end. My projects tend to use Python/Flask for the backend (BE), and HTML/JS for the front end (FE).

I also found some help for creating a Python/Flask BE and JS/React FE, documented here. Its hello world created a folder tree containing only 2,936 files and 239 folders. But it did not seem to show interaction from the React GUI back to a REST API on the server; it seemed to use templates instead, which are really a type of server-side scripting. I was looking for an example of a REST API on the BE in Python/Flask, and of how to call back to that REST API from a React FE client.

So, I rolled my own.

Now, to keep this THE SIMPLEST, I did not use frameworks that minify and otherwise efficiently manage bundling and sending GUI code from the server. Again, those are great for production, but I wanted to just understand the concepts and keep things as SIMPLE as possible. So the version I'll show you has the JSX compiling happening in the browser. This can be improved when I need to make things more complex and production-ready.


So here is my version. It takes 3 files and 1 sub-folder. :)

What the App Does


In this sample FE/BE app, the FE client calls the

     GET /ip 

HTTP endpoint on the BE server to get the server's IP address. The BE calls out to another service, https://httpbin.org/ip, to get the server's external IP address, and returns it as JSON to the client. Since the httpbin.org call takes a little while (on my computer), you get a chance to see the React-based GUI render initially, then render again when the data becomes available (the `tbd` is replaced by the real IP address).

The Server

This is the entire code of the server, which is in the file app.py. It uses Python Flask to define 2 API endpoints: the root endpoint ('/') and the "IP address" endpoint ('/ip').

from flask import Flask, render_template, jsonify
import requests

app = Flask(__name__)  

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/ip')
def ip():
    response = requests.get("https://httpbin.org/ip")
    json = response.json()
    result = json['origin'].split(',')[0]
    return jsonify({"ip":result})

if __name__ == '__main__':
    app.run()
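
As a quick sanity check before wiring up the client, you can call the /ip endpoint directly (this assumes the Flask dev server is running on its default port, 5000):

import requests

# The printed value should look like {'ip': '<your external IP>'}.
print(requests.get("http://localhost:5000/ip").json())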

The Client

This is the entire code of the client, which is in the file templates/index.html. It uses React JS to define 2 components, an outer one and an inner one. The outer one, called <Display />, has some basic HTML structure for the overall app. The inner one, called <ShowIp />, renders the IP address returned from the server.

<html>

<head>
  <script src="https://unpkg.com/react@15/dist/react.min.js"> </script>
  <script src="https://unpkg.com/react-dom@15/dist/react-dom.min.js"> </script>
  <script src="https://unpkg.com/babel-standalone@6.15.0/babel.min.js"></script>
</head>

<body>
  <div id="root"></div>
  <script type="text/babel">
    /* ADD REACT CODE HERE */

    /* ref:
     * https://medium.freecodecamp.org/learn-react-js-in-5-minutes-526472d292f4
     */

    class ShowIp extends React.Component {
      constructor() {
        super();
        this.state = {
          ipValue: "tbd"
        };
        console.log('ctor')
      }
      
      componentWillMount() {
        console.log('willmount')
               
        fetch('/ip')
        .then(results => {
            return results.json();
          })
        .then(data => {
            console.log(JSON.stringify(data))
            this.setState({ipValue: data.ip});
          })
        }

      render() {
        return (
          <h3>Server's IP addr is: {this.state.ipValue} </h3>
        );
      }
    }

    class Display extends React.Component {
      render() {
        return (
          <div className="main">
          <h1>Welcome to the app</h1>
            <div className="ip">
              <ShowIp />
            </div>
          </div>
      );
      }
    }

    ReactDOM.render(
      <Display />, 
      document.getElementById("root")
    ); 

  </script>
</body>

</html>


To Run

To run it, do the following at a Windows command prompt:

 set FLASK_APP=app.py
 set FLASK_ENV=development
 flask run


Then open your browser to http://localhost:5000/ (the Flask dev server's default address).

You should see a (boring) UI that shows the IP address of the server you are running on.



That's about it.  If you give it a try, let me know how it goes, so I can improve this post.
The code is here on github.

Thanks,
 -Mark

Wednesday, December 9, 2015

Exploratory RDF SPARQL Queries

RDF is a schema-less technology for modeling data.  That is, no schema need be defined before you start asserting RDF data ("facts").

Not that it is ever truly schema-less, though. Even though you don't have to pre-declare a schema (such as a table definition in a SQL database), you generally follow some schema (a set of properties and "types") as you define data. (Without this, the resulting RDF base might be nearly unusable.)

When I find myself handed a new RDF base, it helps to do some exploratory queries to find out what schema (formally declared or de facto) is being used. If the creator of the RDF base is really nice, she will have added schema information directly into the triple store, e.g. using OWL class or property declarations (a.k.a. the ontology or TBox). If this is the case, I can just query for what classes and properties are defined, e.g.:

  ## Find declared classes
  prefix owl: <http://www.w3.org/2002/07/owl#>
  select distinct ?class
  where {
   ?class a owl:Class 
  }
  limit 200  # optional limit

and

  ## Find declared properties 
  prefix owl: <http://www.w3.org/2002/07/owl#>
  select distinct ?prop
  where {
   { ?prop a owl:DatatypeProperty }
   UNION
   { ?prop a owl:ObjectProperty }
  }
  limit 200  # optional limit

However, often the RDF base is not that friendly and such queries return nothing.  The next approach is to find everything used as a type or property by brute force, e.g.:

  # list types used
  select distinct ?type {
   ?s a ?type .
  }
  order by ?type

and

  # list properties used
  select distinct ?prop {
   ?s ?prop ?o .
  }
  order by ?prop

This does work.  However, it runs into problems if the number of triples is large (e.g. tens of millions or more), because it does a full scan through every triple in the store!  Not good for large stores--it puts a huge load on the store, and may never return (at least not before killing your store).

So what if the triple store is large, and does not contain the Tbox declarations that make it easy to find the properties and classes used?  Read on...

Large Scale Triple Store Exploration using Samples

Well, here's an approach inspired by the MongoDB Compass tool.  The Compass tool uses a sample of an overall document database to provide insight into its schema.

The SPARQL queries below take the same approach for an RDF store:  they seek to get a feel for the full de facto schema of a large RDF data set by approximating the schema using only a small sample of the overall triples in the store.

The queries are:

  # Count type instances in a sample (for large TS)
  select distinct ?type (count (?type) as ?count)
  { ?s a ?type .
    {
      select *
      {?s ?p ?type.}
      limit 10000
    }
  } 
  group by ?type
  order by ?type

and

  # Count property instances in a sample (for large TS)
  select distinct ?prop (count (?prop) as ?count)
  { ?s ?prop ?o .
    {
      select *
      {?s ?prop ?o.}
      limit 10000
    }
  } 
  group by ?prop
  order by ?prop

These queries do a SPARQL subquery first, to get only a limited number of triples.  This limits the table scan to only a very small subset of the overall triple store.  Then they analyze only that small sample for type or property usage.

In these queries, we get only the first 10,000 triples the triple store wants to give us.  We then extract the de facto types (1st query) or properties (2nd query), with a count of how many times each is used in the sample.  (These counts could be used to approximate relative usage of each of the types / properties.)
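
If you want to run these sampling queries from a script, here is a minimal Python sketch using the requests library against a SPARQL endpoint (the endpoint URL is a placeholder--point it at your own store):

import requests

ENDPOINT = "http://localhost:8080/rdf4j-server/repositories/lubm100"  # placeholder

QUERY = """
select distinct ?prop (count(?prop) as ?count)
{ ?s ?prop ?o .
  { select * {?s ?prop ?o.} limit 10000 }
}
group by ?prop
order by ?prop
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
response.raise_for_status()
for row in response.json()["results"]["bindings"]:
    print(row["count"]["value"].rjust(8), row["prop"]["value"])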

As an example, running the property query against the LUBM-100 data set returns the list of properties used, with counts that help give a quick feel for the relative level of usage of each property.

Yes, this is only an approximation of the actual schema, and there is no guarantee how much of the schema this approach will actually discover.  But in tests I ran against a fairly large triple store (LUBM-100), it seemed to do a pretty good--and very fast!--job when using 10,000 triples as the sample size.  Your ideal sample size could vary, depending on how performant your store is.

Happy exploring!

Thursday, May 31, 2012

SPARQL query from JavaScript

There are JavaScript libraries out there for SPARQL, but it's actually quite simple to query SPARQL from JavaScript without using any special library.  Here is an example of making a SPARQL query directly from a web page using JavaScript.

<html> 
  <head> 
    <title> SPARQL JavaScript </title>
    <script>
    /**
     * Author: Mark Wallace
     *
     * This function asynchronously issues a SPARQL query to a
     * SPARQL endpoint, and invokes the callback function with the JSON 
     * Format [1] results.
     *
     * Refs:
     * [1] http://www.w3.org/TR/sparql11-results-json/
     */
    function sparqlQueryJson(queryStr, endpoint, callback, isDebug) {
      var querypart = "query=" + encodeURIComponent(queryStr); // safer than the deprecated escape()
    
      // Get our HTTP request object.
      var xmlhttp = null;
      if(window.XMLHttpRequest) {
        xmlhttp = new XMLHttpRequest();
     } else if(window.ActiveXObject) {
       // Code for older versions of IE, like IE6 and before.
       xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
     } else {
       alert('Perhaps your browser does not support XMLHttpRequests?');
     }
    
     // Set up a POST with JSON result format.
     xmlhttp.open('POST', endpoint, true); // GET can have caching probs, so POST
     xmlhttp.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
     xmlhttp.setRequestHeader("Accept", "application/sparql-results+json");
    
     // Set up callback to get the response asynchronously.
     xmlhttp.onreadystatechange = function() {
       if(xmlhttp.readyState == 4) {
         if(xmlhttp.status == 200) {
           // Do something with the results
           if(isDebug) alert(xmlhttp.responseText);
           callback(xmlhttp.responseText);
         } else {
           // Some kind of error occurred.
           alert("Sparql query error: " + xmlhttp.status + " "
               + xmlhttp.responseText);
         }
       }
     };
     // Send the query to the endpoint.
     xmlhttp.send(querypart);
    
     // Done; now just wait for the callback to be called.
    };
    </script>
  </head>

  <body>
    <script>
      var endpoint = "http://dbpedia.org/sparql";
      var query = "select * {?s ?p ?o} limit 5" ;

      // Define a callback function to receive the SPARQL JSON result.
      function myCallback(str) {
        // Parse the JSON result (JSON.parse is safer than eval).
        var jsonObj = JSON.parse(str);

        // Build up a table of results.
        var result = " <table border='2' cellpadding='9'>" ;
        for(var i = 0; i < jsonObj.results.bindings.length; i++) {
          result += " <tr> <td>" + jsonObj.results.bindings[i].s.value;
          result += " </td><td>" + jsonObj.results.bindings[i].p.value;
          result += " </td><td>" + jsonObj.results.bindings[i].o.value;
          result += " </td></tr>"; 
        } 
        result += "</table>" ;
        document.getElementById("results").innerHTML = result;
     }
      
     // Make the query.
     sparqlQueryJson(query, endpoint, myCallback, true);
      
    </script>

    <div id="results">
      It may take a few moments for the info to be displayed here...
      <br/><br/>
      Run me in Internet Explorer or I get Cross Domain HTTP Request errors!
    </div>
  
  </body>
</html>

 
In the head section, the code defines a function, sparqlQueryJson(), that takes a SPARQL query string, a SPARQL endpoint URL, and a function to call when the result is ready.  (The optional fourth parameter will show you the raw JSON SPARQL results in an alert window if you set it to true.)  In the body section, the code specifies the query string, endpoint, and callback function, and then calls sparqlQueryJson() to issue the request.

Put the above code in a file called sparql.htm, and give it a try in your browser!

A few things to note:
  1. Most browsers won't let you run this code because it makes a cross-domain request (calls a service on a different host than the HTML was served from).  Use IE and if/when prompted, allow the content. 
  2. I use an asynchronous XMLHttpRequest to perform the query to the SPARQL endpoint.
  3. It would be best to put the sparqlQueryJson() function in a separate file to make it reusable from multiple pages.  I put everything in one file here just to simplify the example slightly. 



Friday, February 3, 2012

What Makes a Wiki Semantic?

A wiki is a web site that allows users to create and edit pages in an easy-to-format way (not HTML). They can easily create links on those pages to other pages in the wiki (and to pages on other web sites). The wiki usually keeps history of page edits, and allows rollback of pages to previous versions. New users can usually create accounts for themselves, and therefore page edits can be tracked based on user. Wikipedia is undoubtedly the most famous example of a wiki.

But what is a semantic wiki? I believe that there are four basic features that, when taken together, transform a wiki into a semantic wiki. While there can certainly be more features than just these four in a semantic wiki, I think that it is these four that must minimally be there to make a wiki semantic.

The first feature is that pages can be typed. That is, they can be marked as representing a certain "type" of thing, e.g. a book or a person or a city or an event. (Another word for type could be "category" or "class".) This type can simply be a word that has meaning to the wiki users, e.g. "Person", "City", etc. Different wiki technologies can differ on how this type is associated with a page (e.g., it can simply be another markup element that can be added to the wiki text of a page).

The second is that page links are assigned meaning. That is, hyperlinks from one page to another can be assigned more meaning than just "this is a link to a page"; the link can be assigned a "type". E.g. a link from a page about a book to a page about the author of that book might be assigned a type "authored-by", or "has-author". Again, this link type can simply be a word that has meaning to the wiki users, e.g. "authoredBy", "located-in", etc.

The third is that data values within a page can be assigned a meaning, e.g. the number 200,000 in a page about a city could be assigned the meaning of "population". Once again, "meaning", at its simplest level, is just associating a word from some vocabulary with the value. This can be thought of as an "attribute name" that goes with the value.

Finally, all of this semantic information can be used in dynamic queries that build tables (or other content) on the page, on the fly, by querying the semantic information. E.g., a table listing the top ten cities in a particular country by population could be created by embedding a query into the page. This is certainly preferable to a person manually keeping a summary of the most populous cities up to date: periodically reviewing the wiki for new cities in that country, determining whether the top ten most populous cities have changed, and hand-editing the changes into a summary table. In contrast, a table built on dynamic queries will always be accurate and instantly up to date (given that the semantics on the city pages are accurate), even as new city pages are added or population numbers change over time!
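
To make that last feature concrete, here is a rough SPARQL rendering of the top-ten-cities query. (Many semantic wikis store or export their data as RDF; the property names here are purely illustrative.)

PREFIX : <http://example.org/wiki/>
SELECT ?city ?population
WHERE {
  ?city a :City ;
        :located-in :Germany ;
        :population ?population .
}
ORDER BY DESC(?population)
LIMIT 10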

In summary, the four key elements that I believe make a wiki semantic are:
  1. The ability to type pages
  2. The ability to assign meaning to links between pages
  3. The ability to assign meaning to data values within a page, and
  4. The ability to query this knowledge to dynamically generate content

Wednesday, December 22, 2010

Custom Rules for Jena Reasoner

Here is an example of creating a custom RDFS++ reasoner using Jena 2.6.2. By RDFS++, I mean the following key rules in RDFS:

rdfs:range, rdfs:domain, rdfs:subClassOf, rdfs:subPropertyOf

and the addition of these lightweight but useful OWL rules:

owl:inverseOf, owl:TransitiveProperty, owl:sameAs

The code uses Jena's GenericRuleReasoner.

Here is the code to the generic inference (ginfer) program:

$ type ginfer.java

import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.reasoner.*;
import com.hp.hpl.jena.vocabulary.*;
import com.hp.hpl.jena.reasoner.rulesys.*;

/** Read RDF XML from standard in; infer and write to standard out. */
class ginfer {
  public static void main(String[] args) {

    // Create an empty model.
    Model model = ModelFactory.createDefaultModel();

    // Read the RDF/XML on standard in.
    model.read(System.in, null);

    // Create a simple RDFS++ reasoner from custom rules.
    StringBuilder sb = new StringBuilder();
    sb.append("[rdfs2: (?x ?p ?y), (?p rdfs:domain ?c) -> (?x rdf:type ?c)] ");
    sb.append("[rdfs3: (?x ?p ?y), (?p rdfs:range ?c) -> (?y rdf:type ?c)] ");

    sb.append("[rdfs6: (?a ?p ?b), (?p rdfs:subPropertyOf ?q) -> (?a ?q ?b)] ");
    sb.append("[rdfs5: (?x rdfs:subPropertyOf ?y), (?y rdfs:subPropertyOf ?z) -> (?x rdfs:subPropertyOf ?z)] ");

    sb.append("[rdfs9: (?x rdfs:subClassOf ?y), (?a rdf:type ?x) -> (?a rdf:type ?y)] ");
    sb.append("[rdfs11: (?x rdfs:subClassOf ?y), (?y rdfs:subClassOf ?z) -> (?x rdfs:subClassOf ?z)] ");

    sb.append("[owlinv: (?x ?p ?y), (?p owl:inverseOf ?q) -> (?y ?q ?x)] ");
    sb.append("[owlinv2: (?p owl:inverseOf ?q) -> (?q owl:inverseOf ?p)] ");

    sb.append("[owltra: (?x ?p ?y), (?y ?p ?z), (?p rdf:type owl:TransitiveProperty) -> (?x ?p ?z)] ");

    sb.append("[owlsam: (?x ?p ?y), (?x owl:sameAs ?z) -> (?z ?p ?y)] ");
    sb.append("[owlsam2: (?x owl:sameAs ?y) -> (?y owl:sameAs ?x)] ");

    Reasoner reasoner = new GenericRuleReasoner(Rule.parseRules(sb.toString()));

    // Create the inferred model using the reasoner and write it out.
    InfModel inf = ModelFactory.createInfModel(reasoner, model);
    inf.write(System.out);
  }
}


Here is some data for demonstration.

$ type data.ttl
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix demo: <http://example.com/demo#> .

demo:Person a owl:Class.
demo:hasAncestor rdfs:range demo:Person ; rdfs:domain demo:Person .
demo:parentOf rdfs:subPropertyOf demo:ancestorOf ; owl:inverseOf demo:childOf .

demo:ancestorOf owl:inverseOf demo:hasAncestor ; a owl:TransitiveProperty .
demo:Trilby demo:parentOf demo:MarkB .
demo:Mark demo:parentOf demo:Elizabeth .
demo:MarkB owl:sameAs demo:Mark .


Here we use the jena.rdfcat program to convert the before- and after-reasoning data to sorted N-Triples format so we can compare the two.


$ java jena.rdfcat -out ntriples data.ttl | sort >before.nt

$ java jena.rdfcat data.ttl | java ginfer | java jena.rdfcat -out ntriples -x - | sort >after.nt

And here is the comparison. Everything shown is a triple that was not in the original data, but was inferred by executing the rules.

$ diff before.nt after.nt
2a3,9
> <http://example.com/demo#childOf> <http://www.w3.org/2002/07/owl#inverseOf> <http://example.com/demo#parentOf> .
> <http://example.com/demo#Elizabeth> <http://example.com/demo#childOf> <http://example.com/demo#Mark> .
> <http://example.com/demo#Elizabeth> <http://example.com/demo#childOf> <http://example.com/demo#MarkB> .
> <http://example.com/demo#Elizabeth> <http://example.com/demo#hasAncestor> <http://example.com/demo#Mark> .
> <http://example.com/demo#Elizabeth> <http://example.com/demo#hasAncestor> <http://example.com/demo#MarkB> .
> <http://example.com/demo#Elizabeth> <http://example.com/demo#hasAncestor> <http://example.com/demo#Trilby> .
> <http://example.com/demo#Elizabeth> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.com/demo#Person> .
4a12,15
> <http://example.com/demo#hasAncestor> <http://www.w3.org/2002/07/owl#inverseOf> <http://example.com/demo#ancestorOf> .
> <http://example.com/demo#Mark> <http://example.com/demo#ancestorOf> <http://example.com/demo#Elizabeth> .
> <http://example.com/demo#Mark> <http://example.com/demo#childOf> <http://example.com/demo#Trilby> .
> <http://example.com/demo#Mark> <http://example.com/demo#hasAncestor> <http://example.com/demo#Trilby> .
5a17,24
> <http://example.com/demo#Mark> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.com/demo#Person> .
> <http://example.com/demo#Mark> <http://www.w3.org/2002/07/owl#sameAs> <http://example.com/demo#Mark> .
> <http://example.com/demo#Mark> <http://www.w3.org/2002/07/owl#sameAs> <http://example.com/demo#MarkB> .
> <http://example.com/demo#MarkB> <http://example.com/demo#ancestorOf> <http://example.com/demo#Elizabeth> .
> <http://example.com/demo#MarkB> <http://example.com/demo#childOf> <http://example.com/demo#Trilby> .
> <http://example.com/demo#MarkB> <http://example.com/demo#hasAncestor> <http://example.com/demo#Trilby> .
> <http://example.com/demo#MarkB> <http://example.com/demo#parentOf> <http://example.com/demo#Elizabeth> .
> <http://example.com/demo#MarkB> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.com/demo#Person> .
6a26
> <http://example.com/demo#MarkB> <http://www.w3.org/2002/07/owl#sameAs> <http://example.com/demo#MarkB> .
9a30,33
> <http://example.com/demo#Trilby> <http://example.com/demo#ancestorOf> <http://example.com/demo#Elizabeth> .
> <http://example.com/demo#Trilby> <http://example.com/demo#ancestorOf> <http://example.com/demo#Mark> .
> <http://example.com/demo#Trilby> <http://example.com/demo#ancestorOf> <http://example.com/demo#MarkB> .
> <http://example.com/demo#Trilby> <http://example.com/demo#parentOf> <http://example.com/demo#Mark> .
10a35
> <http://example.com/demo#Trilby> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.com/demo#Person> .

$

Sunday, May 30, 2010

Speaking at SemTech 2010

I'll be speaking this year at SemTech 2010. The full list of presentations is available online.

My one-hour talk is Tuesday, June 22, at 5pm. It's part of the Ontology Design and Engineering track, in the Technical-Advanced category, and is entitled "Rapid Prototyping with the Jena Command Line Utilities".

It should be a fun talk that includes demonstrations of how to use the utilities for file conversion (see previous post), RDF file merging, SPARQL queries, and filtering triples. There is also some leave-behind code in the slides so you can write your own Jena inference utility.

I hope to see you there!
