While day one focused more on issues of scaling, encryption, and incentives, day two put the focus back on concrete use cases of Swarm for dapp development.

We kicked off the second day of the Swarm Orange Summit with two talks focused on Swarm deployments and devops / infrastructure management. Anton Evangelatov and Rafael Matias were the first to take the stage. Their talk centred on running and testing large-scale Swarm deployments on a Kubernetes cluster with bundled tracing, log aggregation, metrics, and dashboards – using tools such as OpenTracing, Jaeger, InfluxDB and Grafana.
They walked the audience through how each of these tools works and how to use them, and concluded with a live demo, launching a test cluster of 1,200 Swarm nodes.

Next up was Mainframe’s Camron G. Levanger, who delved a bit deeper into Kubernetes and how developers can use Terraform to deploy the infrastructure for a Swarm cluster. He also presented practical code examples for Helm and its companion Tiller, tools that help developers configure and deploy applications to Kubernetes clusters on AWS.

Injecting Swarm into concrete products

After the presentations on Swarm in cloud infrastructure, DappNode’s Eduardo Antuña gave a demonstration of how they have embedded Swarm into DappNode, bringing Swarm into the home. He extolled the benefits of running your own nodes at home and showed the audience how to install the DappNode package, as well as how to install Swarm and dapps that run on top of it, using Datafund’s Fairdrop as a concrete example.

After a short break it was time for project sessions, starting with Mainframe. Miloš Mošić introduced the room to the Erebos JavaScript client for Swarm. Regardless of the audience’s feelings towards JavaScript, Miloš first walked everyone through the Swarm functions included in the Erebos library. In the second part, he focused on the Timeline protocol, which Mainframe created in collaboration with the Epic Labs team, and how it can be used for common application needs.
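At its core, the Timeline protocol is an append-only log: each new entry references the previous one, so a reader can walk back through the whole history starting from the latest entry. Below is a rough, purely in-memory sketch of that idea – the `Timeline` class and its methods are hypothetical illustrations, not the Erebos API:

```typescript
import { createHash } from "crypto";

// Hypothetical in-memory sketch of an append-only timeline: each chunk
// records its payload and the id of the previous chunk, so readers can
// walk back through history from the latest entry.
interface TimelineChunk {
  previous: string | null; // id of the prior chunk, null for the first entry
  content: string;
}

class Timeline {
  private chunks = new Map<string, TimelineChunk>();
  private latest: string | null = null;

  // Append a new entry linked to the current head; returns its id.
  add(content: string): string {
    const chunk: TimelineChunk = { previous: this.latest, content };
    const id = createHash("sha256").update(JSON.stringify(chunk)).digest("hex");
    this.chunks.set(id, chunk);
    this.latest = id;
    return id;
  }

  // Walk backwards from the head, returning entries newest-first.
  history(): string[] {
    const out: string[] = [];
    let id = this.latest;
    while (id !== null) {
      const chunk = this.chunks.get(id)!;
      out.push(chunk.content);
      id = chunk.previous;
    }
    return out;
  }
}
```

The appeal of this shape for Swarm is that only a single pointer to the latest entry ever needs to change, while all older entries stay immutable and content-addressed.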
Continuing in Mainframe’s colours, Shane Howley presented their polished MainframeOS and how it hooks into Swarm. Using a demo video, Shane showed the audience the basic workings of the OS and what it can do – for example, how it uses feeds and PSS for a genuinely decentralised chat dapp. As he explained, the OS relies heavily on Swarm and its functions, both for communication between individuals and for storing dapp data.

Just before the lunch break the project sessions concluded with Datafund and their presentation of Multibox and the Fairdrop dapp. It’s no easy feat building dapps these days, and the idea behind the fds.js library was to make their creation and deployment as easy as in Web 2.0, explained Datafund’s developer Dan Nickless. Among Fairdrop’s current challenges, Dan listed push notifications, sending very large files, and user acquisition. He then demonstrated how the Fairdrop code works in real time and how dapps built on fds.js can share the same file storage and the data stored there.

He concluded his presentation with a quick overview of Multibox, a solution for the systematic categorisation of data. Tadej Fius took the baton from there and showed the room how Multibox categorises data from different sources, as well as the difficulties currently holding it back.

Climbing the obstacles

When lunch started settling down, the difficulties of decentralised databases were served on the talks menu. Dmitry Kurinskiy looked at how Fluence could use Swarm as external persistent storage to provide additional security for decentralised data processing. To have a database you need both storage and computation, Kurinskiy noted. Fluence guarantees the correctness of computations performed in a decentralised network, while at the same time allowing applications with throughput and latency similar to centralised ones. Swarm, on the other hand, could provide the independent, decentralised storage that guarantees data availability at an efficient cost.
There aren’t many storage solutions that enable the private and secure transfer of large files, Zahoor Mohamed from Datafund began his talk. That’s what Datafund had in mind when attempting to upload 100 GB of data to Swarm. As Zahoor detailed, no new algorithm was needed, but certain bugs had to be overcome, like manually changing the default number of chunks per node. After that the upload was possible, although the upload time remained problematic, for which Zahoor proposed a few solutions, such as a larger chunk size.
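To get a feel for the scale involved, here is a back-of-the-envelope calculation assuming Swarm’s commonly cited 4 KiB chunk size and 128-branch chunk tree (the exact parameters of Zahoor’s setup aren’t given in the talk):

```typescript
// Rough chunk count for a Swarm upload, assuming 4 KiB chunks organised
// into a tree where each intermediate chunk references up to 128 children.
const CHUNK_SIZE = 4096;
const BRANCHES = 128;

function chunkCount(fileBytes: number): number {
  let level = Math.ceil(fileBytes / CHUNK_SIZE); // data chunks at the bottom
  let total = level;
  while (level > 1) {
    level = Math.ceil(level / BRANCHES); // intermediate chunks one level up
    total += level;
  }
  return total;
}
```

For 100 GiB this works out to roughly 26.4 million chunks, which helps explain why defaults tuned for everyday uploads got in the way.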
Swarm feeds were mentioned several times during the first two days of the summit, and they were the main topic for Epic Labs’ Javier Peletier. Feeds enable dapp developers to write applications that allow users to find, update and retrieve content, proving ownership with a signature, but without having to interact with the blockchain. The focal point of his presentation was a more precise explanation of the algorithm that checks whether the content at a certain address has been updated.
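The core idea behind that lookup can be sketched as an epoch grid: time is divided into windows of width 2^level, and a new update is published at the highest level whose window no longer contains the previous update. What follows is an illustrative reconstruction of that idea, not Swarm’s exact implementation:

```typescript
// Simplified sketch of an epoch grid for feed lookups. An epoch at a
// given level covers 2^level seconds; its base time is the update time
// with the low `level` bits cleared. Narrow (low) levels pin down an
// update precisely; wide (high) levels are cheap to guess when probing.
const HIGHEST_LEVEL = 31;

// Base time of the epoch containing `time` at a given level.
function baseTime(time: number, level: number): number {
  return time - (time % 2 ** level);
}

// When publishing at `time`, pick the highest level whose epoch does
// NOT also contain the previous update, so the two never collide.
function nextLevel(prevTime: number, time: number): number {
  for (let level = HIGHEST_LEVEL; level >= 0; level--) {
    if (baseTime(prevTime, level) !== baseTime(time, level)) return level;
  }
  return 0;
}
```

A lookup can then start from a wide epoch and descend level by level, checking at each step whether a more recent update exists, without ever touching the blockchain.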
Last up for the day was Louis Holbrook, who looked at how the PSS messaging system and feeds work together to create dynamic data flows over Swarm. He sketched ideas of how we might use feeds and PSS for a decentralised multi-user chat dapp.
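The fan-out pattern behind such a chat dapp can be illustrated with a minimal, purely in-memory topic bus – subscribers register for a topic and every message sent to that topic reaches all of them. This is only an analogy for how PSS addresses messages by topic; real PSS routes over the Swarm network with optional encryption, none of which is modelled here, and the `TopicBus` name is hypothetical:

```typescript
// In-memory illustration of topic-addressed messaging in the spirit of
// PSS: participants subscribe to a topic, and every message sent to that
// topic fans out to all current subscribers.
type Handler = (from: string, payload: string) => void;

class TopicBus {
  private subs = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const list = this.subs.get(topic) ?? [];
    list.push(handler);
    this.subs.set(topic, list);
  }

  send(topic: string, from: string, payload: string): void {
    for (const handler of this.subs.get(topic) ?? []) {
      handler(from, payload);
    }
  }
}
```

In the chat sketch Louis described, feeds would then give each participant a persistent, signed history alongside this ephemeral message delivery.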