The first day of Swarm Orange Summit 2019 was marked by broader questions of how to scale the network and by the search for possible solutions.

Swarm’s mission is to provide censorship-resistant storage and communication infrastructure for a sovereign digital society, emphasized Swarm’s team lead, Viktor Tron, as he opened the three days of talks and presentations. To make that possible, the team is working hard to make the network stable, scalable and, ultimately, economically sustainable.

Building this kind of storage and communication layer raises certain questions that need to be answered, Tron added, such as discoverability (how nodes find each other in the network), imposition (how a recipient can expect to receive messages from a sender), interactivity (what throughput and latency are tolerated or required) and privacy (how much information is leaked).

As Viktor pointed out, the underlying decentralized node architecture of Swarm gives it a particular kind of overlay network, which in turn enables different messaging strategies. The current structure already supports services that use these strategies, e.g. chats or notifications.
Continuing his talk, he explained that as a storage layer Swarm works as a distributed chunk store. This means that data is stored and transmitted in chunks, which are immutable and can be verified locally. Chunk-based content delivery enables Swarm to offer some very interesting services that are a step away from today’s centralized counterparts: anonymous websites, blogs or dapps, verified content delivery, zero-leak communication channels, forums, offline mailboxing, etc.
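To make the chunk idea concrete, here is a minimal sketch of local verification of a content-addressed chunk. It is a simplification for illustration only: real Swarm chunks are fixed-size and addressed with a binary Merkle tree hash, not a plain SHA-256 of the payload as assumed here.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// Chunk is a simplified content-addressed unit of storage.
// Real Swarm chunks use a binary Merkle tree hash over a fixed-size
// payload; sha256 over the raw data is used here only to keep the
// sketch self-contained.
type Chunk struct {
	Address [32]byte // derived from the content, never assigned freely
	Data    []byte
}

// NewChunk derives the address from the data, so the chunk is immutable:
// changing the data would change the address.
func NewChunk(data []byte) Chunk {
	return Chunk{Address: sha256.Sum256(data), Data: data}
}

// Verify lets any node check, locally and without trusting the sender,
// that the payload really belongs to the requested address.
func (c Chunk) Verify() bool {
	sum := sha256.Sum256(c.Data)
	return bytes.Equal(sum[:], c.Address[:])
}

func main() {
	c := NewChunk([]byte("hello swarm"))
	fmt.Printf("address: %x, valid: %v\n", c.Address, c.Verify())

	c.Data = []byte("tampered") // any modification is detectable
	fmt.Println("after tampering, valid:", c.Verify())
}
```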

How to drive participation? Incentives.

Next up was Racin Nygaard from the University of Stavanger. His talk focused on motivating increased participation with incentives. He began by explaining the concept of proof of storage, which verifies that the nodes in the system are behaving as desired.

The logic goes, Nygaard said, that if we have a reliable proof that certain nodes are behaving accordingly and have a certain reputation, we can, with proper incentives and low entry barriers, motivate more people to run storage nodes, which are a crucial part of decentralization. At the moment, however, we mostly have centralized incentive models that are not really suitable for would-be node owners. The benefit of a decentralized model is that data owners and storage nodes can agree on data access terms directly through a smart contract, not via a central authority that sets fixed pricing models, Nygaard concluded.
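The talk stayed at the conceptual level; as a hedged illustration of the challenge-response idea behind proofs of storage, the sketch below assumes the data owner pre-computes a few salted digests before handing the data off, so it can later audit the node without keeping the data itself. The scheme and names are simplifications for illustration, not the construction Nygaard presented.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// Before outsourcing a chunk, the owner prepares a handful of one-time
// challenges: random nonces together with the expected digest of
// nonce||data. Later the owner can audit the storage node while keeping
// only this small table of challenges.
type Challenge struct {
	Nonce    []byte
	Expected [32]byte
}

func prepareChallenges(data []byte, n int) []Challenge {
	challenges := make([]Challenge, n)
	for i := range challenges {
		nonce := make([]byte, 16)
		rand.Read(nonce)
		challenges[i] = Challenge{
			Nonce:    nonce,
			Expected: sha256.Sum256(append(nonce, data...)),
		}
	}
	return challenges
}

// respond is what an honest storage node computes: it must still hold
// the full data to produce the correct digest for a fresh nonce.
func respond(nonce, storedData []byte) [32]byte {
	return sha256.Sum256(append(nonce, storedData...))
}

func main() {
	data := []byte("outsourced chunk payload")
	audits := prepareChallenges(data, 3)

	// The storage node answers a challenge it has never seen before.
	answer := respond(audits[0].Nonce, data)
	fmt.Println("proof accepted:", answer == audits[0].Expected)
}
```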

The artificial scarcity that current copyright enforcement creates is becoming increasingly inadequate, began Daniel A. Nagy from Swarm in the following talk, titled Decentralized social media incentives. Additionally, he believes, the advertising entwined with this model doesn’t support creators in the best way and has become intrusive.

In this high-level overview Nagy looked at some possible solutions to the problem and their shortcomings. According to him, we have transitional models, like Spotify, Yandex.music and Steemit, that offer different benefits, e.g. incentives, analytics and ranking. Unfortunately, in some cases these incentives are very difficult to implement in a decentralized world and come with critical shortcomings.
There are also emerging alternatives, such as the crowdfunding model and ranking-based funding, and there are the more radical solutions that Nagy proposed. Examples of the more radical approach would be collaborative creation, a component library (components that already exist and can be reused, e.g. a movie script) and forks.

Making the network more resistant and scalable

After the lunch break Vero Estrada-Galiñanes talked the crowd through entanglements and why they would make a good feature for Swarm. She demonstrated how a storage network can increase fault tolerance and security through alpha entanglement codes, which mix files in a system. Their main benefit in a p2p system, as Vero demonstrated, is that they are less affected by poor peer availability, since only a few data blocks are needed to repair failures.
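As a rough, hedged illustration of why repair can succeed with only a couple of surviving blocks, the toy example below entangles two data blocks into a single XOR parity; it is not the actual alpha entanglement construction, which chains blocks into multiple parity strands.

```go
package main

import "fmt"

// xorBlocks returns the bitwise XOR of two equal-length blocks.
func xorBlocks(a, b []byte) []byte {
	out := make([]byte, len(a))
	for i := range a {
		out[i] = a[i] ^ b[i]
	}
	return out
}

func main() {
	d1 := []byte("block-one.......")
	d2 := []byte("block-two.......")

	// Entangle the two data blocks into a parity block.
	parity := xorBlocks(d1, d2)

	// Suppose d1 is lost to churn. It can be rebuilt from just two
	// surviving blocks: its entangled neighbour and the parity.
	recovered := xorBlocks(d2, parity)
	fmt.Println("recovered d1:", string(recovered))
}
```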

How can we scale the sharing of encrypted data? That was the question Michael Egorov from NuCypher addressed in his talk. He compared different ways to share encrypted data: one is to use “updateable trees of keys”, another is to use proxy re-encryption. Which one to use depends on the scalability requirements (in number of users, volume of data and time) as well as on the threat model, Egorov pointed out.
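Egorov’s comparison was at the whiteboard level; as a hedged illustration of why the number of users matters, the sketch below shows naive per-recipient key wrapping, the baseline that both key trees and proxy re-encryption try to improve on. The scheme and names are my own simplification, not NuCypher’s API.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext with a 256-bit key using AES-GCM,
// prepending the nonce to the ciphertext.
func seal(key, plaintext []byte) []byte {
	block, _ := aes.NewCipher(key)
	gcm, _ := cipher.NewGCM(block)
	nonce := make([]byte, gcm.NonceSize())
	rand.Read(nonce)
	return gcm.Seal(nonce, nonce, plaintext, nil)
}

func main() {
	// The content is encrypted once under a random data key ...
	dataKey := make([]byte, 32)
	rand.Read(dataKey)
	ciphertext := seal(dataKey, []byte("shared document"))

	// ... but the data key itself must be wrapped separately for every
	// recipient, so sharing (and revocation) costs grow with the number
	// of users -- the scaling problem the talk addressed.
	recipients := [][]byte{make([]byte, 32), make([]byte, 32), make([]byte, 32)}
	wrapped := make([][]byte, 0, len(recipients))
	for _, rk := range recipients {
		rand.Read(rk)
		wrapped = append(wrapped, seal(rk, dataKey))
	}
	fmt.Println(len(ciphertext), "bytes of content,", len(wrapped), "wrapped keys")
}
```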

No data persistency, no data

Just before the last talk of the day Daniel Nagy addressed a hot topic in Swarm: persistency of data. What makes persistency difficult in Swarm is, above all, churn, meaning nodes joining and leaving the network, stressed Nagy. When nodes leave, data needs to be moved around, and since nodes join and leave the network constantly, what should be edge circumstances are actually normal operating conditions.

There is also the problem of spam, which makes other data unavailable, and an inherent tension between making Swarm censorship-resistant and making it spam-resistant. He discussed the problems of opportunity costs and misaligned incentives: less profitable content has weaker built-in incentives to be stored, and on the other side there is a tragedy of the commons where, if everyone gets paid for storing, some might want to offload less profitable data onto others in favour of more profitable content. Nagy believes that the SWAP incentivisation protocol alleviates some of these concerns.

Ralph Pichler rounded off the day’s talks with the updates that were implemented in the Swap, Swear and Swindle framework in the year following the last Swarm Orange Summit.