Tag: Research Data

Happy Birthday DataCite!

An interview with Britta Dreyer on ten years of DataCite: a success story around the Digital Object Identifier (DOI)

The DataCite association celebrated its 10th anniversary at the beginning of December 2019, a reason to look back and to peek into the future. In this interview, Britta Dreyer, head of the PID and Metadata Services department at TIB – Leibniz Information Centre for Science and Technology and DataCite Business Manager, talks about what has been achieved, the challenges ahead, and the cooperation with DataCite partners.

Ten years of DataCite: that is a great success. But the history of DataCite actually began a little earlier, namely in 2004 with the first registration of a Digital Object Identifier (DOI), a unique and persistent digital identifier, at TIB. Tell us how it all began.

Scientific research generates a gigantic and constantly growing amount of digital research data. These data sets are of immense importance for science: on the one hand, they increase the transparency and traceability of research results; on the other, rapid technological developments open up the opportunity to reuse research data in further research projects. These reasons were decisive in TIB's decision to become the world's first DOI registration agency for research data in 2005. It emerged from the project "Publikation und Zitierbarkeit von Primärdaten" (Publication and citability of primary data) – STD-DOI for short – funded by the German Research Foundation (DFG). The aim was and still is to make research data

Looking back at our "FAIR Data and Software" workshop

In mid-July, just under 25 young researchers and 4 external instructors gathered at TIB to practise applying the FAIR principles to data sets and to self-written scientific software. The experimental format combined theoretical lessons on the FAIR principles and their relevance for researchers with hands-on live-coding and data-analysis exercises modelled on the Software, Data & Library Carpentries.