
Prototype deployment details

timrdf edited this page Feb 4, 2012 · 46 revisions

What's first

What we'll cover

This page covers how to set up the FAqT Brick at http://sparql.tw.rpi.edu/datafaqs/dump (and http://sparql.tw.rpi.edu:3030/datafaqs/query) and the FAqT Brick Explorer at http://aquarius.tw.rpi.edu/datafaqs. It can be used as a template for setting up other FAqT Bricks and explorers. It DOES NOT cover how to deploy FAqT Services; it assumes those are already deployed.

Let's get to it

sparql.tw

Get the DataFAQs utilities:

  • cd /opt
  • sudo git clone git://github.com/timrdf/DataFAQs.git

Get some dependencies:

csv2rdf4lod-automation is needed for a couple of RDF-handling utilities.

  • cd /opt
  • sudo git clone git://github.com/timrdf/csv2rdf4lod-automation.git
  • cd csv2rdf4lod-automation
  • sudo ./install.sh

tdb is needed to create a triple store of the RDF files accumulated in the FAqT Brick.

  • sudo mkdir /opt/tdb
  • cd /opt/tdb
  • sudo curl -L -o tdb-0.8.10.zip http://sourceforge.net/projects/jena/files/TDB/TDB-0.8.10/tdb-0.8.10.zip/download
  • sudo unzip tdb-0.8.10.zip

fuseki is needed to expose the tdb triple store as a SPARQL endpoint.

  • sudo mkdir /opt/fuseki
  • cd /opt/fuseki
  • sudo wget http://openjena.org/repo/org/openjena/fuseki/0.2.0/fuseki-0.2.0.zip
  • sudo unzip fuseki-0.2.0.zip

virtuoso can be used in addition to, or in place of, fuseki.

  • TODO describe after we've migrated to datafaqs.aquarius.tw

rapper is needed to retrieve and reserialize RDF (and count triples).
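rapper ships with the Raptor RDF library; the package name below is an assumption for Debian/Ubuntu hosts (it may be raptor2-utils on newer releases, or available via yum/macports elsewhere):

```shell
# Assumption: Debian/Ubuntu host; rapper comes from the Raptor utilities package.
sudo apt-get install raptor-utils

# Quick sanity check: fetch an RDF document, guess its syntax, and count triples.
rapper --guess --count http://xmlns.com/foaf/spec/index.rdf
```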

Set up a FAqT Brick:

  • sudo mkdir /srv/DataFAQs
  • sudo chown user:group /srv/DataFAQs
  • mkdir -p /srv/DataFAQs/default/faqt-brick

Set the following [environment variables](DATAFAQS environment variables) in /srv/DataFAQs/default/faqt-brick/datafaqs-source-me.sh:

export DATAFAQS_HOME="/opt/DataFAQs"
export DATAFAQS_BASE_URI="http://sparql.tw.rpi.edu"
export DATAFAQS_PUBLISH_TDB="true"
export DATAFAQS_PUBLISH_TDB_DIR="/srv/DataFAQs/default/faqt-brick/tdb"
export DATAFAQS_PUBLISH_THROUGHOUT_EPOCH="true"
export DATAFAQS_PUBLISH_METADATA_GRAPH_NAME="http://www.w3.org/ns/sparql-service-description#NamedGraph"
export CSV2RDF4LOD_HOME="/opt/csv2rdf4lod-automation"
export TDBROOT="/opt/tdb/TDB-0.8.10/"
export PATH=$PATH`/opt/DataFAQs/bin/df-situate-paths.sh`

Create an epoch (see FAqT Brick):

  • cd /srv/DataFAQs/default/faqt-brick
  • /opt/DataFAQs/bin/df-epoch.sh

Link the FAqT Brick to the web:
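One way to expose the on-disk brick at http://sparql.tw.rpi.edu/datafaqs/dump is a symlink from the web server's document root; the Apache docroot path below is an assumption for this host:

```shell
# Assumption: Apache docroot at /var/www with FollowSymLinks enabled for it.
sudo mkdir -p /var/www/datafaqs
sudo ln -s /srv/DataFAQs/default/faqt-brick /var/www/datafaqs/dump
```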

If tdb didn't load during epoch creation, load the triple store:

  • df-load-triple-store.sh --recursive-by-sd-name

Turn on the SPARQL endpoint:
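A minimal sketch, assuming fuseki-0.2.0.zip was unpacked to /opt/fuseki/fuseki-0.2.0 and the TDB directory matches DATAFAQS_PUBLISH_TDB_DIR above:

```shell
# Serve the TDB store as a SPARQL endpoint on fuseki's default port 3030,
# which matches http://sparql.tw.rpi.edu:3030/datafaqs/query above.
cd /opt/fuseki/fuseki-0.2.0
./fuseki-server --loc=/srv/DataFAQs/default/faqt-brick/tdb /datafaqs &
```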

Deploy lodspeakr:

(aquarius.tw)
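A sketch of the lodspeakr deployment on aquarius.tw; the docroot path is hypothetical, and the clone URL assumes the public lodspeakr repository:

```shell
# Assumption: docroot backing http://aquarius.tw.rpi.edu/datafaqs (path hypothetical).
cd /var/www/datafaqs
sudo git clone git://github.com/alangrafu/lodspeakr.git
cd lodspeakr
sudo ./install.sh   # interactive; asks for the site's base URL and SPARQL endpoint
```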

What's next
