SePublica 2015. “Do Show, don’t tell!”
Challenge us with a creative submission. Let us see and experience your take on publications for the Web of data; we want to see NEW ideas in action and INNOVATION that makes our lives easier. Is it about scholarly communication? Then what problem are you solving? How are you making the content interoperable? How is it different from, or better than, traditional approaches?
At SePublica we are interested in the question of how Semantic Web technology is being used as part of publication workflows. Advances in technology have made it possible for publications to move from paper-based to purely electronic dissemination formats, e.g. EPUB, HTML, and PDF. However, in spite of improvements in the distribution, accessibility, and retrieval of information, the connective tissue promised by the Semantic Web is still rare in most publications. We want to help understand how the Semantic Web is supporting publication workflows in, but not limited to, scholarly communication, e-science, and new trends in journalism.
The Web has succeeded as a dissemination platform for news, for scientific and non-scientific papers, and for communication in general. However, most of that information remains locked up in discrete digital documents that are often little more than replicas of their print ancestors. Without machine-friendly content, the extent to which data can be explored is limited. Data journalism, for instance, reflects the increased interaction between content producers (journalists) and fields such as design, computer science, and statistics; from the journalists' point of view, it represents "an overlapping set of competencies drawn from disparate fields". Journalists are adopting data-driven arguments.
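To make the contrast concrete, one common way to give a document machine-friendly content is to embed structured metadata as JSON-LD alongside the human-readable text. The sketch below builds such a description using the schema.org vocabulary; the article title, author, and dataset URL are purely illustrative, not drawn from any real publication.

```python
import json

# A minimal JSON-LD description that could be embedded in an article's HTML
# (inside a <script type="application/ld+json"> element) so that machines can
# read the article's metadata and follow its link to the underlying data.
# All values here are hypothetical placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "headline": "Example: Linking Claims to Data",
    "author": {"@type": "Person", "name": "A. Researcher"},
    "datePublished": "2015-05-31",
    "isBasedOn": "https://example.org/dataset/42",  # the dataset behind the claims
}

print(json.dumps(article, indent=2))
```

A crawler that understands schema.org can now discover the dataset behind the article without parsing its prose, which is exactly the exploration that print-style documents preclude.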
Likewise, the validation of scientific results requires reproducible methods: the data, processes, and algorithms used in the original experiments should be made available in a complete and computationally amenable form. Although biomedical journals often ask for "Materials and Methods" sections and datasets to be made available, reproducing experiments and sharing, reusing, and leveraging scientific data are becoming increasingly difficult. Experimental data in the sciences is a Big Data problem: how can we make effective use of scientific data? How should it be semantically represented, interlinked, and reused? How can we effectively represent experiments in scientific publications? How can we bridge the gap between publications and data repositories?

As both Europe and the US embark on big science, e.g. the Brain Activity Map (BAM), the Human Brain Project (HBP), and the CERN experiments, massive amounts of data are being generated. Just as in the Human Genome Project, as data is produced the need for data management grows exponentially, eventually surpassing the needs inherent to laboratory work. Data standards and ontologies will thus become more and more necessary to the laboratory sciences. Gaining a deeper understanding of disorders such as schizophrenia, Alzheimer's disease, suicide, and PTSD will require a far more sophisticated infrastructure than any we have seen so far. How are the Semantic Web and ontologies supporting reproducibility and replicability in e-research infrastructures? How does this translate to scholarly publications? Scholarly data and documents are of most value when they are interconnected rather than independent.
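The kind of interconnection described above can be sketched as RDF-style triples linking a publication to the experiment it reports, and the experiment to the data and algorithm it used. The predicate names below are loosely modelled on the W3C PROV vocabulary, and all identifiers (`ex:article/1`, `ex:dataset/42`, and so on) are hypothetical; a real system would use resolvable URIs and an RDF store rather than an in-memory list.

```python
# A toy triple store: each entry is a (subject, predicate, object) statement.
# Predicate names are PROV-inspired; identifiers are illustrative only.
triples = [
    ("ex:article/1",    "ex:reports",              "ex:experiment/7"),
    ("ex:experiment/7", "prov:used",               "ex:dataset/42"),
    ("ex:experiment/7", "prov:wasAssociatedWith",  "ex:algorithm/kmeans"),
    ("ex:dataset/42",   "prov:wasDerivedFrom",     "ex:instrument/sequencer-3"),
]

def objects(subject, predicate):
    """Return all objects matching a (subject, predicate) pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Starting from the article, a machine can follow links down to the raw data
# and the method, which is the chain reproducibility depends on.
experiment = objects("ex:article/1", "ex:reports")[0]
print(objects(experiment, "prov:used"))               # -> ['ex:dataset/42']
print(objects(experiment, "prov:wasAssociatedWith"))  # -> ['ex:algorithm/kmeans']
```

The point of the sketch is the traversal: once publication, experiment, dataset, and algorithm are explicit nodes in one graph, the gap between the paper and the data repository becomes a link rather than a search.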
Without machine-processable data, the possibilities of the Web will remain limited. The network effect of data grows organically in the context of the Web. For open data to succeed, the ability to interconnect and join, to summarise and compare, to monitor, to extrapolate, and to infer is central. In this way we will soon see paradigm shifts taking place across domains; how is this happening in data journalism, in scholarly communication, and in web-based communication in general? The Web becomes a platform; we are starting to see this in some e-science domains, but open challenges remain ahead.