We have a duty of care to the makerspaces we’re researching, to how the dataset represents spaces and their communities, and to how they are subsequently perceived. We’ve really appreciated the feedback and conversations so far; they’ve allowed us to create our core makerspace survey.
One way to navigate these responsibilities and tensions is with a protocol that ensures rigor in how we gather information and then in how we use it. For that purpose we are using the ODI’s (Open Data Institute’s) Open Data Self Assessment Questionnaire to guide us through the key questions and best-practice responses. One example is using a Contributor License Agreement so that people knowingly give permission for their contributions to be used in this dataset.
There is less obvious guidance as well; even something like using the CSV file format (which we’d decided on as the ‘lowest common denominator’ machine-readable format) needs documentation to be usable. How we ask the questions in our survey will affect how the answers can be interpreted as structured data. We need to ensure that the dataset has a ‘schema’: a list of the standard terms our dataset uses, and details of what format users can expect the responses to be in.
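To make that concrete, here is a minimal sketch of what a schema buys a consumer of the CSV. The field names and types below are purely illustrative, not our actual survey schema; the point is that declaring each field’s expected format lets anyone reading the file cast and check values consistently.

```python
import csv
import io

# Hypothetical schema: field names, types, and descriptions are
# illustrative examples, not the real survey's fields.
SCHEMA = {
    "name": {"type": "string", "description": "Name of the makerspace"},
    "opened_year": {"type": "integer", "description": "Year the space opened"},
    "has_kitchen": {"type": "boolean", "description": "Kitchen or shared social area?"},
}

# How each declared type maps raw CSV text onto a typed value.
CASTS = {
    "string": str,
    "integer": int,
    "boolean": lambda v: {"true": True, "false": False}[v.lower()],
}

def validate_row(row):
    """Cast each CSV field to the type the schema declares, raising on mismatch."""
    return {field: CASTS[spec["type"]](row[field]) for field, spec in SCHEMA.items()}

sample_csv = "name,opened_year,has_kitchen\nExample Space,2014,true\n"
rows = [validate_row(r) for r in csv.DictReader(io.StringIO(sample_csv))]
print(rows[0])
# → {'name': 'Example Space', 'opened_year': 2014, 'has_kitchen': True}
```

Without the schema, a consumer has no way to know whether `has_kitchen` should be read as the string `"true"` or the boolean `True`; with it, every tool built on the dataset makes the same interpretation.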
Fundamentally though, we want this dataset to be usable and useful. It is easy to get lost in a rabbit hole of standards and formats, schemas and structured data conventions. We don’t want to obscure a very useful dataset in a new format that isn’t yet standard, but equally, for some of our questions we may need to adapt and stretch the current standards, and we want to do that responsibly, in a way that reflects the needs of each use case.
We’ve appreciated the conversations we’ve had with people who’ve built similar datasets, and we have taken their advice on board. Likewise, talking to people about possible uses has shown us that, to be useful, the dataset needs to be interoperable with existing platforms and services, while also allowing new ones to be built on it.
As such, we reflect frequently on use cases. Curiosity alone isn’t enough to justify including a question; we have to be convinced that, when aggregated with responses from a breadth of makerspaces, the answers will offer valuable insight.
Following the feedback we gathered on the data points, we’ve amended the questions and turned them into a survey. We’ve included questions about accessibility, as that clearly affects who can use the space and how frequently. We’ve also included questions about what open source projects makerspaces maintain: in aggregate, this information shows the breadth of activity and demonstrates how each makerspace is used and supported by its community. Recognising that some demographic data may not be tracked, we’ve amended how we ask that question. Even simple questions, like ‘does your space have a kitchen or shared social space?’, tell both potential users and researchers about the nature of a given makerspace.

We’d like to express our thanks for your thoughts and contributions; they have been much appreciated.