Our VisLOD tutorial will take place on May 26, 2014 in Anissaras (Crete, Greece) at the ESWC2014 conference. The idea is to bring together researchers and practitioners interested in visual and interactive techniques for exploring Linked Open Data and Social Media for e-Governance. Our hands-on tutorial will cover technical aspects from two perspectives:
Results may vary: reproducibility, open science and all that jazz.
How should we evaluate research and researchers? Reproducibility underpins the scientific method: at least in principle, if not in practice. The willing exchange of results and the transparent conduct of research can only be expected up to a point in a competitive environment. Contributions to science are acknowledged, but not if the credit is for data curation or software. From a bioinformatics viewpoint, how far could our results be reproducible before the pain is just too high? Is open science a dangerous, utopian vision or a legitimate, feasible expectation? How do we move bioinformatics from a culture where results are post-hoc “made reproducible” to one where they are pre-hoc “born reproducible”? And why, in our computational information age, do we communicate results through fragmented, fixed documents rather than cohesive, versioned releases? In this talk, which I gave as a keynote at the 2013 joint conference Intelligent Systems in Molecular Biology / European Conference on Computational Biology, I will explore these questions drawing on 20 years of experience in both the development of technical infrastructure for Life Science and the social infrastructure in which Life Science operates.
10:00–10:30 Paper session I
Timo Willemsen, Anton Feenstra and Paul Groth. Building Executable Biological Pathway Models Automatically from BioPAX
10:30–11:00 Break
11:00–12:45 Paper session II
Guillermo Palma, Maria-Esther Vidal, Louiqa Raschid and Andreas Thor. Exploiting Semantics from Ontologies and Shared Annotations to Find Patterns in Annotated Linked Open Data
Cameron Mclean, Mark Gahegan and Fabiana Kubke. Capturing intent and rationale for Linked Science: design patterns as a resource for linked laboratory experiments
Jun Zhao, Graham Klyne, Matthew Gamble and Carole Goble. A Checklist-Based Approach for Quality Assessment of Scientific Information
13:45–15:30 Paper session III
Nico Adams, Armin Haller, Alexander Krumpholz and Kerry Taylor. A Semantic Lab Notebook – Report on a Use Case Modelling an Experiment of a Microwave-based Quarantine Method
Niels Ockeloen, Antske Fokkens, Serge Ter Braake and Piek Vossen. BiographyNet: Managing Provenance at multiple levels and from different perspectives
Michiel Hildebrand, Rinke Hoekstra and Jacco van Ossenbruggen. Using Semantic Web Technologies to Reproduce a Pharmacovigilance Case Study
16:00–17:30 Co-writing session: how can Linked Science techniques solve problems in scientific reproducibility? (2 × 45 minutes)
First 45 minutes: produce a matrix of reproducibility problems vs. potential Linked Science technologies in breakout groups
Second 45 minutes: merge the matrices into a consensus view and/or ideas for a paper or blog post
In this tutorial you will build an interactive web application in R that fetches up-to-date lecture data from the data.aalto.fi SPARQL endpoint, renders the result both as a table and as a calendar-like chart, and offers a way to download the data as iCal calendar events.
The openly available R package SPARQL makes it possible to connect directly to Linked Data and to use the SPARQL query language to select the parts of the data that are interesting for analysis. It thus brings massive, rich data sets together with the analytical power of the R language and environment.
This approach and these tools contribute to the Linked Science and Open Science movements by supporting the transparency of science and the conduct of transdisciplinary research.
In this tutorial we will introduce the ideas and concepts behind Linked Science, and show via illustrative examples how to query and analyze Linked Data in practice from within the R environment for statistical analysis.
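The basic workflow taught here can be sketched in a few lines of R. The endpoint URL below is the data.aalto.fi SPARQL endpoint mentioned above; the query itself is a hypothetical example, since the actual lecture vocabulary may differ.

```r
# Minimal sketch of querying Linked Data from R with the SPARQL package.
# Requires a network connection; the SELECT query is illustrative only.
library(SPARQL)

endpoint <- "http://data.aalto.fi/sparql"
query <- "
  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
  SELECT ?course ?label
  WHERE { ?course rdfs:label ?label . }
  LIMIT 10
"

result <- SPARQL(endpoint, query)
df <- result$results   # a plain data.frame, ready for tables or plotting
head(df)
```

From here, the data.frame can be rendered as a table, charted with standard R plotting functions, or serialized to iCal events.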
The overall goal of LIFE is to facilitate sharing of spatio-temporal information and thus improve interdisciplinary collaboration in science and education. The approach addresses all kinds of resources, ranging from articles and books through maps to raw data. The Linked Data approach will be used as a basis for the university library’s eScience services to seamlessly integrate their offerings into both the scientific and the global information infrastructure. These eScience services will enable researchers and students to systematically navigate the dynamic and heterogeneous global network of spatio-temporal information (discovery) and to create the relevant views (access) meeting their information needs. LIFE is a research activity in the Linked Open Data University of Münster (LODUM) initiative that fosters exchanging scientific and educational data as Linked Data.
The Institute for Geoinformatics is looking for highly motivated candidates to fill one position at the post-doc level and one at the PhD student level. Both positions are fully funded for two years, starting in January 2013. Salary levels are at the TVL salary scale (TVL-E 13 for the PhD student, ~40k EURO p.a. gross, and TVL-E 14 for the post-doc, ~45k EURO p.a. gross, commensurate with experience). The candidate filling the PhD student position is expected to join the graduate school for geoinformatics. The LIFE team will be completed by up to four student assistants.
Application profile for both openings:
strong background in at least one of the following areas:
geographic information science
semantic web technologies / linked data
science 2.0 / open science
programming experience, especially in web development on an open source stack
interest in research challenges in the area of semantic interoperability and data integration
Applications should be sent to Prof. Dr. Werner Kuhn (email@example.com) in a single PDF file. Applications from women are encouraged and will be favored in case of equal qualification, competence and specific achievements. Preference will be given to disabled applicants in case of equivalent qualification.
The application process remains open until October 15, 2012 or until the positions are filled.
Linked Science is an approach to interconnect scientific assets in order to enable transparent, reproducible and transdisciplinary research. The Tutorial on Linked Science 2012 (TOLSCI2012) will be a half-day tutorial covering different aspects of Linked Science: semantic description of scientific data (e.g. observations and measurements), existing vocabularies, the bridging of statistical analysis and Linked Data, and license and copyright issues around data.
Through exercises and the introductory talk, TOLSCI2012 aims to stimulate transdisciplinary discussions among researchers and publishers from various backgrounds on the semantic integration of scientific information.
The main part of the tutorial will be a hands-on session on how to describe, access and analyze scientific observation data, and especially how to retrieve only the part of the data that is relevant to a given research question.
We will teach
how Linked Data solves the access part, and
how SPARQL makes it possible to query only a subset of the data.
In particular, the participants will learn in a hands-on session
how Linked Data can be accessed for statistical analysis in R with the help of the SPARQL package, and
how and which visualization techniques and tools are available for interacting with the data.
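To give a flavour of the subset-selection part: a SPARQL query can restrict results to exactly the observations relevant to a research question. The query below is a hypothetical sketch; the prefixes and predicate names are illustrative, not taken from a specific dataset.

```sparql
# Illustrative example: select only measurements of one observed
# property within a time window, instead of downloading everything.
PREFIX ex:  <http://example.org/vocab/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

SELECT ?observation ?time ?value
WHERE {
  ?observation ex:observedProperty ex:airTemperature ;
               ex:resultTime ?time ;
               ex:resultValue ?value .
  FILTER (?time >= "2012-01-01T00:00:00Z"^^xsd:dateTime)
}
ORDER BY ?time
```

The result set of such a query is already the analysis-ready slice of the data, which can then be loaded into R as described above.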
Tomi Kauppinen is a postdoctoral researcher in the Muenster Semantic Interoperability Lab (MUSIL) at the Institute for Geoinformatics at the University of Muenster, Germany. He holds a PhD from Aalto University, Finland, with a thesis on reasoning about change and time. He chaired the First International Workshop on Linked Science 2011 at the International Semantic Web Conference (ISWC2011), the track on Interoperability and Semantics of the Geoinformatik 2011 conference, and led the breakout session for Vocabularies for Science at Science Online London 2011 organized by Nature. His research focuses on spatiotemporal and semantic modeling of processes such as deforestation, extreme weather events, changes in administrative borders, digital cultural heritage, and linked science. His current projects include the opening and linking of scientific and educational data in the LinkedScience.org project and in the Linked Open Data University of Muenster (LODUM). He coordinates activities as a post-doc in the International Research Training Group on Semantic Integration of Geospatial Information.
Willem Robert van Hage is a researcher in the field of information integration on the web. His main research topics in recent years have been geospatio-temporal semantics, ontology alignment, and ontology learning. He is a co-organizer of the Detection, Representation, and Exploitation of Events in the Semantic Web workshop (DeRiVE 2011) and, since 2006, a co-organizer of the Ontology Alignment Evaluation Initiative (OAEI), a collaborative benchmarking effort for the evaluation of ontology alignment techniques. He has led the development of the Simple Event Model (SEM), an ontology for the description of events. In recent years he has worked on the combination of Semantic Web reasoning (RDF(S), OWL) and geospatio-temporal reasoning, developing a spatiotemporal indexing package for the popular SWI-Prolog programming language, which led to a best paper award at the EKAW 2010 conference, as well as Semantic Web packages for SPARQL querying and RDF storage for the R statistical programming language. He is the coordinator of the interfaculty Web Science minor at the VU University Amsterdam.
Today, May 29th, a poster and a demo related to the Linked Science and LODUM projects will be presented at the 9th Extended Semantic Web Conference (ESWC2012). For your convenience, the poster and the demo will be presented next to each other at the Posters and Demos session.
One of them is about the Linked Open Data University of Münster – Infrastructure and Applications (see also the data portal). The idea is to open up the university’s data silos, integrate the data, and make it easy to build applications on top of the data collection. The productivity map shown as a video below is an example of such an application. It renders the university buildings in 3D: the building height indicates the number of publications written by researchers working in the respective building. The KML file is also available for download; just open it in Google Earth to explore the productivity map.
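For readers curious how such a map is encoded: KML can represent a building as an extruded polygon whose altitude carries the data value. The snippet below is a minimal sketch under that assumption; the coordinates, the building, and the height scaling (publications × 10 m) are all hypothetical, not taken from the actual LODUM file.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example building (42 publications)</name>
    <Polygon>
      <!-- extrude the footprint up from the ground -->
      <extrude>1</extrude>
      <altitudeMode>relativeToGround</altitudeMode>
      <outerBoundaryIs>
        <LinearRing>
          <!-- lon,lat,height: here the height (420 m = 42 publications
               times a 10 m scale factor) encodes the publication count -->
          <coordinates>
            7.5956,51.9692,420 7.5960,51.9692,420
            7.5960,51.9695,420 7.5956,51.9695,420
            7.5956,51.9692,420
          </coordinates>
        </LinearRing>
      </outerBoundaryIs>
    </Polygon>
  </Placemark>
</kml>
```

Opening such a file in Google Earth renders each building as a 3D block whose height can be compared visually across the campus.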