Both sites allowed people to churn out simple scripts to scrape data from wherever they wanted, and forget about the infrastructure supporting it all: task scheduling, data storage, server monitoring, and so on.
At the same time, the platform keeps the data open. You can't make private datasets on morph; your scraper is opening that data for everyone. It also connects you with the users of that data, since anyone who downloads it is listed.
From the docs:
By being able to see openly who is using what, we aim to promote collaboration and serendipity. Creating or scraping data is important, but it's people using it that really makes it exciting. Showing who downloads what, connects people making scrapers with those who use the data.
Many of the scrapers aim to make hidden or obscure data more accessible, e.g. the planning alerts scrapers, which normalise access to the myriad local council websites across the country.
Originally I wanted to help out a bit and fix a few failing scrapers. It felt like a pleasant, open data side project.
It occurs to me, though, that the various blobs of data each scraper curates can be strung together using the morph.io API.
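As a rough illustration, morph.io exposes each scraper's data via a `data.json` endpoint that accepts an SQL query and an API key. The sketch below builds such a request and fetches rows from two scrapers so they could be joined locally; the owner and scraper names here are hypothetical placeholders, not real scrapers.

```python
# Minimal sketch of pulling data from the morph.io API.
# Scraper/owner names below are illustrative, not real scrapers.
import json
import urllib.parse
import urllib.request


def morph_query_url(owner, scraper, query, api_key):
    """Build a data.json URL that runs an SQL query against a scraper's data."""
    params = urllib.parse.urlencode({"key": api_key, "query": query})
    return f"https://api.morph.io/{owner}/{scraper}/data.json?{params}"


def fetch_rows(owner, scraper, query, api_key):
    """Fetch query results as a list of dicts (requires network access)."""
    with urllib.request.urlopen(morph_query_url(owner, scraper, query, api_key)) as resp:
        return json.load(resp)


# Stringing two datasets together (hypothetical scrapers): fetch council
# records from one scraper and planning applications from another, then
# join them in memory on a shared council name column.
# councils = fetch_rows("someone", "uk-councils", "select * from data", API_KEY)
# apps = fetch_rows("someone", "planning-apps", "select * from data", API_KEY)
# by_council = {c["name"]: c for c in councils}
```

Each scraper's SQLite table becomes queryable over HTTP this way, so one script can weave together datasets that live in entirely separate scrapers.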