Provision of scientific objects using smart contracts on a blockchain – how and why?
On today's web, centralised servers are used to make all kinds of scientific objects, among other things, available. Such hosts are bottlenecks and single points of failure that cause many errors and problems. It would be better, as I argued in part one of my blog series on the use of blockchains in education and research, to ensure the availability of scientific objects by means of a decentralised P2P file system. As I hypothesised in my first post, the persistent naming and distribution of objects would then be clearly separated from the guarantee of permanent availability, which would be one of various modular services additionally provided in such a scenario. In today's post, I would like to examine the question of how institutions that are accountable to the public can ensure availability by means of "smart contracts" on a blockchain, and why this new opportunity offers crucial advantages over current business models relating to digital scientific archives.
The role played by “proof of work” in blockchains
The cyber currency Bitcoin is a P2P network in which transactions between network participants, denominated in the bitcoin currency unit, are processed continuously. Each individual transaction is recorded in a public, distributed ledger: the blockchain. In order to aggregate a series of valid transactions into a block, a mathematical puzzle must be solved that serves no purpose outside the network: a value must be found such that the checksum computed over the previous block's checksum and the contents of the potential new block meets a given difficulty target. This is known as "proof of work". The first participant to solve the task broadcasts the solution, which the other network participants can verify cheaply. If the solution is correct, she is automatically rewarded with newly created currency units. This "mining" of new blocks is what secures all the transactions recorded in them. Among other things, "proof of work" is an effective way of preventing the network from collapsing under a flood of invalid or malicious transactions, since extending the ledger is deliberately made costly.
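The asymmetry at the heart of proof of work can be illustrated with a much-simplified sketch (the function names, the use of hex-digit difficulty and SHA-256 here are illustrative conventions, not Bitcoin's actual block format):

```python
import hashlib

def mine(prev_hash: str, block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce so that the block's SHA-256 digest starts with
    `difficulty` hex zeros -- a toy version of the mining puzzle."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(prev_hash: str, block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Any participant can check a claimed solution with a single hash."""
    digest = hashlib.sha256(f"{prev_hash}{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Finding the nonce requires thousands of attempts on average, while checking it costs one hash computation; this gap between expensive production and cheap verification is precisely what makes flooding the network uneconomical.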
How smart contracts work on a blockchain
Vitalik Buterin launched a new blockchain in 2014, called Ethereum. (Literature on Ethereum in TIB's collections.) Buterin's (main) objective, however, was not to create another cyber currency. Rather, the pivotal new idea was to combine "proof of work" with a "payload": transactions in the Ethereum network can carry small pieces of programme code, and the nodes that process a transaction must run the code associated with it. Each transaction only becomes valid once it has been proven that the associated programme code has been executed. This makes it possible to run decentralised applications. The network is paid for running an application in the Ethereum currency Ether (ETH). Anyone can view the code, but once the programme has been deployed and paid for, it keeps running and can no longer be stopped by anyone. If a computer within the network fails before the assigned programme step has demonstrably been executed, another computer automatically takes on the task.
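The principle can be sketched in a few lines of Python (this is not EVM code; the toy "contract", account names and state-root hashing are illustrative assumptions): because the contract code is deterministic, every node that replays the same transaction on the same state arrives at the same result, and a hash of the state lets the nodes confirm this without trusting one another.

```python
import hashlib
import json

def run_contract(state: dict, tx: dict) -> dict:
    """A deterministic toy contract: transfer `amount` between accounts."""
    new_state = dict(state)
    new_state[tx["from"]] -= tx["amount"]
    new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

def state_root(state: dict) -> str:
    """Hash of the canonicalised state; nodes compare these roots to agree."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

# Every node replays the same transaction on the same starting state ...
state = {"alice": 10, "bob": 0}
tx = {"from": "alice", "to": "bob", "amount": 3}
roots = {state_root(run_contract(state, tx)) for _ in range(3)}
# ... and all arrive at an identical state root, so the outcome is verifiable.
assert len(roots) == 1
```

Real Ethereum contracts are written in languages such as Solidity and executed by every validating node; the sketch only shows why replicated, deterministic execution makes the result publicly checkable.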
The interesting thing about these innovative applications is that they take place reliably and can be monitored without having to trust a central entity. This is particularly interesting for financial services and contracts in general. After all, if a user has to rely on a third party for a contract to become effective, it makes the transaction vulnerable and expensive. The prospect of having reliable self-executing contracts that monitor themselves, i.e. “smart contracts”, is one of the main reasons why millions are currently being invested in blockchains such as Ethereum. (A more detailed introduction to blockchain-based smart contracts that is well worth reading, which also addresses the typical problems involved in this approach, appeared recently on the BlockGeeks website.)
Organising the provision of scientific objects using blockchains
MaidSafe, Storj, Sia and Filecoin (an IPFS sister project) are blockchains that specialise in the reliable storage of objects. In order to earn virtual currency units, participants in the respective network provide disk space and bandwidth to the decentralised app governing the network. The app splits objects into smaller parts, encrypts them, and distributes these parts redundantly across the network in order to keep them available for their owners. Virtual currency units owned by the network's participants can either be used to store their own objects, or they can be sold for other currencies on the free market – to other people interested in buying decentralised storage services (or who simply want to speculate in them).
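The split-and-replicate step can be sketched as follows (a toy placement scheme under assumed parameters; real systems additionally encrypt each chunk and use far more sophisticated placement and repair logic):

```python
import hashlib

def split_and_distribute(data: bytes, peers: list,
                         chunk_size: int = 4, replicas: int = 2) -> dict:
    """Toy sketch: split an object into chunks and assign each chunk to
    `replicas` peers, chosen deterministically from the chunk's hash."""
    placement = {}
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        # Deterministic starting peer derived from the content hash.
        start = int(digest, 16) % len(peers)
        placement[digest] = [peers[(start + r) % len(peers)]
                             for r in range(replicas)]
    return placement
```

Because every chunk lives on more than one peer, the loss of a single node does not make the object unavailable, and because placement is derived from content hashes, any participant can recompute where a chunk ought to be.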
A university or a scientific society might participate in such a blockchain, either by "investing" some of its own server capacity, and/or simply by buying some of the blockchain's currency units out of its own budget. The institution could then store objects from its research community on this blockchain-based storage service. A smart contract running on the blockchain could keep a regular, publicly accessible record of these exchanges.
How would the general public benefit from these smart contracts?
- A smart contract contains a precise definition of the level of availability guaranteed for which objects over which period.
- The performance of the contract is subject to public scrutiny.
- The contract can be copied, adapted and reused as a template for similar such projects.
The indirect advantage for the publicly financed institutions utilising the contract is that their objectives and their success are opened up to public scrutiny.
Irrespective of this, it could make sense from the standpoint of a funding agency or a ministry of science, for example, to provide virtual currency units – so-called crypto tokens – for the smart contract. Some of this amount is then automatically paid out to a recipient by the contract, following proof that certain contract criteria have been met.
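Such a funding arrangement amounts to an escrow with a public payout log. A minimal sketch, assuming a hypothetical `AvailabilityContract` with a fixed reward per verified proof (proof verification itself is left outside the sketch):

```python
class AvailabilityContract:
    """Toy escrow: a funder deposits tokens; a provider is paid a fixed
    reward for each verified proof that the contract criteria were met."""

    def __init__(self, funder_deposit: int, reward_per_proof: int):
        self.balance = funder_deposit
        self.reward = reward_per_proof
        self.payouts = []          # public log, open to scrutiny

    def submit_proof(self, provider: str, proof_ok: bool) -> int:
        """Pay out only if the (externally verified) proof is valid
        and the contract still holds enough tokens."""
        if proof_ok and self.balance >= self.reward:
            self.balance -= self.reward
            self.payouts.append((provider, self.reward))
            return self.reward
        return 0
```

On a real blockchain, the payout log and the remaining balance would be visible to everyone, so a funding agency, the provider and the public could all audit exactly which criteria were met and what was paid for them.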
In the future, in addition to the provision of objects, other supplementary services relating to scientific objects could also be integrated into self-executing smart contracts. Examples include quality testing, format conversion and indexing.
Extent of public accessibility of blockchains in such models
Research organisations could also launch their own blockchain. The open-source Hyperledger software, an umbrella project whose frameworks are often used for industrial applications, is one option. Configured as a "permissioned blockchain", the organisations could then specify, for example, who is permitted to participate in the blockchain. In such a model, crypto tokens can be shielded from speculative trade on the free market. This may be appropriate to keep the risk surrounding the investment and spending of public money via the blockchain low and calculable.
It goes without saying that the same is conceivable in the other direction: financial resources are publicly raised to comply with a smart contract – more or less as a start-up with a minimum amount of organisational effort. Investor Chris Dixon describes why crypto tokens facilitate the creation of business models that differ significantly from those of players currently established on the web. Decentralised Autonomous Organisations (DAO) can even manage the funds raised for such projects autonomously via a smart contract, e.g. in the form of concluding new smart contracts.
In principle, it is also possible to hide one's "own" blockchain from the public completely. However, since we are concerned here with the application of blockchains in education and research, we are talking about services and funds whose use is accountable to the public. For this reason, such blockchains should always be operated publicly. With this aspect in mind, it is even appropriate in terms of information ethics to use blockchain protocols that are openly documented and freely licensed. (Which, of course, is now the rule in the area of blockchain software anyway.)
Who will be first to dare use decentralised applications and organisations in research? Systemic barriers and pioneers
Many education and research organisations still consider it too risky to offer crypto tokens on the free market, let alone to establish DAOs – and not just because of the shaky start of the first DAO in 2016. This is compounded by the fact that popular science start-ups such as FigShare and ResearchGate now pursue the “traditional” business models of the web described by Chris Dixon. Precisely this type of player will find it difficult to develop a vision for a decentralised organisation (see also Benedikt Fecher and Sönke Bartling 2016 on systemic resistance to blockchain-typical solutions in research operations).
For this reason, the potential of blockchain-based smart contracts in research, described by Buterin, is currently being explored by a mere handful of individuals. Besides James Littlejohn's Dsensor project (video of his lecture at the S3 Conference in Hannover in 2017), mention should be made of a more recent example, Jure Triglav's Replication Foundation: a DAO that seeks to help fund replication studies. The situation is slightly more promising in education, at least as far as applications for certifying academic achievements are concerned. In this respect, a number of institutions such as the MIT Media Lab and the Open University have taken on a leading role.
Akasha – example of a building block for decentralised applications beneficial to researchers
As described above, the development of blockchain-based smart contracts for the application area of research is clearly at the pioneer stage. However, building blocks exist that were not developed with researchers in mind, but which are suitable for some of the specific challenges in research nonetheless.
Mihai Alisie, one of the co-founders of Ethereum, launched Akasha, a completely decentralised social network. In the public alpha release of the software, it is already possible to write simple blog posts, to comment on them, to follow other network participants and to chat with them, for example (see figure). The comparatively sleek user interface of the alpha version, launched in spring 2017, is particularly striking. It is also remarkable to note that no new blockchain was launched in this case. Comparatively well established, popular modular systems were used instead: Ethereum and the P2P file system IPFS. Ultimately, Akasha could turn out to be an essential building block of an alternative to ResearchGate and similar.
An aside: blockchain-based smart contracts in view of the conflict on Open Access transformation in scientific publishing
At first sight, the above models of a self-executing smart contract for the digital provision of scientific objects, monitorable reliably and publicly, may seem like a mere gimmick. At the very least, one could wonder whether the technical effort and the risks involved justify transferring such innovations to an area that already possesses digital solutions refined over decades of development.
A global debate on the Open Access transformation of scientific publishing has now been raging for two years. In short, this debate is about the fact that the big commercial journal publishers (Elsevier, Springer Nature, Wiley, etc.) had refused for decades to switch to Open Access, i.e. free access to mainly publicly funded research results on the web.
This strategy changed when funding agencies, scientific societies, entire countries and the European Union started making Open Access a condition for the funding of research projects. Now, even traditional players publish Open Access to a great extent. Since shareholders' profit expectations have skyrocketed in this sector in recent years, authors and/or their institutions are required to pay high fees for this service. However, publishers are occasionally willing to offset these fees against subscription fees, which are still charged for most content. And all stakeholders know that there is generally "enough money in the system" to finance the publication of all articles at the usual level of quality. However, since publishers have no reason to disclose their costing, it is not known at present whether and how the transition of all scientific publishing could be financed on the basis of the established players' business models without soaring costs.
Recommended literature on the background of the transformation debate:
- Tullney, Marco. (2016). Herausforderungen der Open-Access-Transformation. Zenodo. https://doi.org/10.5281/zenodo.255766
- Jahn, Najko, and Tullney, Marco. (2016) A study of institutional spending on open access publication fees in Germany. PeerJ 4:e2323 https://doi.org/10.7717/peerj.2323
- Ad-hoc workgroup Open-Access-Gold in the priority initiative “Digital Information” of the Alliance of Science Organizations in Germany (Ed.) (2016): Recommendations for the Open Access Transition: Strategic and practical anchorage of Open Access in informational provisioning of research institutions. Goescholar. https://doi.org/10.3249/allianzoa.012
Consequently, a handful of private companies are now being paid from public resources to provide a non-transparent array of services for the scientific community. As a result, this investment of public money is reflected in record profits for company shareholders. Against this backdrop, self-executing smart contracts on modularised services for the scientific community that can be monitored reliably and publicly would be an interesting alternative.
At the very least, they could increase transparency and traceability, promoting a fair and sustainable funding model for Open Access. Unlike in some commercial Open Access models, the services required in publishing – document conversion, archiving, indexing, commenting, translation, etc. – could be decentralised more reliably.
Acknowledgements and response to criticism
Some of the thoughts on smart contracts in research outlined in this post are based on a discussion I had with James Littlejohn (see above), Sönke Bartling, Helge Holzmann and Angelina Kraft following the Software and Services for Science (S3) Conference in Hannover in May 2017, and another discussion I had with Sebastian Posth, Titusz Pan and Felix Saurbier. Marco Tullney made important remarks on the first version of this article – I am grateful to all of you.
I would also like to thank David S. H. Rosenthal for his detailed critique of part one of my series of articles about blockchains in the TIB Blog. In his critique, Rosenthal also refers to a noteworthy article by Chris H. J. Hartgerink that was published at around the same time, containing similar proposals. Rosenthal is an outstanding pioneer in the area of archiving digital scientific objects. It goes without saying that some of his points of criticism make more sense to me than others.
For instance, Rosenthal emphasises weaknesses of the hashing method in two places, backing up his claim with the SHA-1 collisions identified at the beginning of 2017. This can be countered with the assessment of cryptography expert Bruce Schneier, who points out that this problem had been looming for many years – which is why NIST selected the successor algorithm SHA-3 as early as 2012 (standardised in 2015). The P2P file system I mentioned – IPFS – consciously decided back in 2014 not to use SHA-1, precisely because the potential problems were already known at that time.
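IPFS handles this by making every content address self-describing: the digest is prefixed with an identifier of the hash function used (its "multihash" format), so the algorithm can be upgraded without invalidating old addresses. A minimal sketch of the idea, using a readable name prefix instead of the real binary multihash encoding:

```python
import hashlib

def content_address(data: bytes, algo: str = "sha256") -> str:
    """Self-describing content address: the digest is prefixed with the
    algorithm's name, so the hash function can later be upgraded (e.g.
    to SHA-3) without breaking addresses minted under the old one."""
    digest = hashlib.new(algo, data).hexdigest()
    return f"{algo}-{digest}"
```

An address minted with SHA-256 remains distinguishable from one minted later with SHA3-256, so a network can migrate hash functions gradually rather than all at once.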
Epilogue: P2P networks have come of age, overcoming their teething troubles
Rosenthal’s criticism that P2P networks are inappropriate for the reliable provision of services is clearly more serious than the issue of SHA-1 collisions. One of his points of criticism is the concentration of power on individual nodes within networks – the behaviour of which spells incalculable risks for the functioning of the entire network. He also argues that such networks offer too few incentives for many individual participants to make a positive contribution to the overall performance of the network.
As such, Rosenthal highlights a typical weakness of many traditional P2P network architectures. However, the extent to which this holds for blockchain-based smart contracts is unclear to me from this criticism. As outlined above, these innovative architectures enable decentralised applications to be executed in a manner that is verifiable, transparent to the public and de facto unstoppable. This ensures reliability, even if network performance is concentrated in the hands of a few participants. This also essentially covers the incentive problem. After all, as shown above, the use of smart contracts enables
- proof to be given that service providers have delivered a service (financed outside the blockchain),
- the increased involvement of additional "small" service providers, and
- the channelling of funding for services via crypto tokens:
  - either crypto tokens that are traded on the free market, such as those of Ethereum or MaidSafe,
  - or crypto tokens that are shielded from speculative trade on a "permissioned blockchain".
In short: the teething troubles of P2P systems anticipated by Rosenthal have largely been overcome at the level of elementary technical concepts. This is reflected in the wide range of applications of products such as Ethereum and Hyperledger in industry and trade. The implementation of these concepts in application areas such as education, research and cultural heritage has only just begun.