User Research
Jan 23, 2024
How to build a UX research repository (that people actually use)
Extend the shelf life of your research and set your team up for long-term success with a robust research repository. Here’s how to build yours from scratch.
Ella Webber
Every UX research report was once a mountain of raw, unstructured data. User research repositories help collate that data, disseminate insights, democratize research, and spread the value of user research throughout your organization.
However, building (and maintaining) an accessible user research repository is no simple task. Getting people to use it is a whole other ball game.
In this guide, we’ll break down the specifics of user research repositories: the benefits of building your own research library, best practices for getting started, and our favorite tools for building a robust research repository.
What is a research repository in UX research?
A user research repository is a centralized database that includes all your user research data, UX research reports, and artifacts. Different teams—like design, product, sales, and marketing—can find insights from past projects to contextualize present scenarios and make informed decisions.
Storing all your research data in a single place ensures every team has access to user insights and can use them to make research-driven decisions. Typically maintained by a research operations team, a well-structured research repository is an important step toward breaking down silos and democratizing user research for the entire organization.
If you’re looking to improve research maturity across your organization and start scaling UX research, building a watertight user research repository is your first step.
What’s included in a research repository?
Building a UX research repository can be challenging. Between compiling all the data, creating a collaborative space, and making it easily accessible to the teams who need it, you might be struggling to identify a starting point.
Here’s a checklist of all the essentials to streamline the setup:
✅ Mission and vision
✅ Research roadmap
✅ Key methodologies
✅ Tools and templates
✅ Research findings
✅ Raw data and artifacts
Mission and vision
Whether you have a dedicated user research team or involve multiple departments in the UX research process , you need a clear mission and vision statement to create a shared purpose and foster collaboration. Not only should you include your wider UX research strategy and vision, but a ‘North Star’ for your repository, too.
For example, the mission statement for your repository could be, “Streamline our UX design process and promote informed decision-making with a centralized hub of user feedback and insights.”
Research roadmap
A clear UX roadmap makes it easy to prioritize your research efforts and seamlessly organize your repository. It analyzes your objectives and outlines all upcoming projects in a given timeline. You can use this roadmap to catalog your previous research campaigns and plan ahead.
Key methodologies
You should also list all the research methods you follow to create repeatable success. You can save SOPs for different methodologies to minimize the scope for error and set your team members up for success. Mia Mishek, UX Research Operations Program Manager at Pax8, explains:
“Every repository should include common documents related to the research at hand, such as a brief, moderation guide/test script, and readout. Having all the documents easily accessible allows others to cross-reference while consuming research and use past research as a jumping-off point for further research.”
Tools and templates
Create a list of collaboration and product management tools for different steps in the product research process, such as usability testing, interviews, note-taking, data analysis, and more. Outline these and don’t forget to include quick-access links to all your UX research tools.
It can also be useful to outline instructions and key templates for specific research methods or analysis techniques. Consider including any tried-and-tested question repositories or best practices.
Research findings
Your repository should include a set of findings from every study. While you can add the final reports for all projects, it’s also a good practice to add quick takeaways and tags to make your collection easily searchable.
If you’ve conducted different types of analysis, it’s worth linking these here, too. Whether that’s a photo of your thematic analysis workshop, a walkthrough video of your results, or a link to a digital affinity diagram.
Raw data and artifacts
Alongside research reports, you can store all the raw data from each study, like user interview recordings and transcriptions. Your team members can revisit this data to plan upcoming projects effectively or connect the dots between past and present insights.
Depending on how you store this, you may want to consider keeping piles of raw data in a ‘view only’ or locked area of the repository, to avoid the risk of accidental tampering or deletion.
What are the benefits of a research repository?
User research is an ongoing process. The trickiest part for most teams when pursuing continuous research is breaking down silos and establishing a democratized approach to prevent wasteful overlap, unnecessary effort, and a lack of knowledge-sharing.
A good research repository fosters a culture of collaboration and supports user-centric design through collectively prioritizing and understanding your users.
Here are a few core benefits of building a user research repository:
Quickly access user research data
An easily searchable UX research repository lets you filter through a mountain of data and find specific insights without pouring hours into the task. Mia emphasizes the importance of making information easily accessible:
“You should be able to go into the repository, understand what research has been done on X topic, and get the information you’re after. If you need someone else to walk you through the repository, or if there’s missing information, then it’s not doing its job.”
By creating a self-serve database, you can make all the data accessible to everyone and save time spent on reviewing prior research to feed existing efforts.
Inspire ideas and prioritize future research
A research repository can also help in identifying knowledge gaps in your existing research and highlight topics worth further exploration. Analyzing your past data can spark ideas for innovative features and guide your research efforts.
Different teams can utilize a research repository to help guide the product roadmap on areas that still need to be explored in the app, or areas that need to be revisited.
Mia Mishek, UX Research Operations Program Manager at Pax8
Build a shared knowledge library
One crucial advantage of a repository is that it helps democratize user research. Not only does it highlight the value of research and showcase the efforts of your product and research teams, but by centralizing research findings, you’re making it easier for everyone to make data-informed, user-centric decisions.
A research repository also provides versatility and other use cases for your research insights—from product managers to sales leaders, all stakeholders can access user insights to make research-driven decisions across the organization. Whether that’s informing a sales pitch, a product roadmap, or a business strategy, there are endless applications for UX research.
This practice of knowledge-sharing and democratizing user insights is a big step in building a truly user-centered approach to product development.
Contextualize new data with past evidence
Your repository records all the raw data from past projects, making it easier to compare and contrast new findings with previous user research. This data also allows researchers to develop more nuanced reports by connecting the dots between present and past data.
Mia explains how these repositories cut down on the redundant effort of trying to dig up old research data on any topic: “A repository benefits UX researchers and designers because it’s not uncommon to ask what research was done on XYZ area before conducting more research. No one wants to do reductive work, so without a repository, it’s easy to forget past research on similar topics.”
What’s more, research libraries prevent the same research from being repeated, allowing as many people as possible to benefit from the research while minimizing the time and resources used.
4 Best research repository tools and templates
You don’t need a specialized tool to create a user research repository. A well-organized, shared Google Drive or Notion teamspace with detailed documentation can be just as effective. However, if you can, a dedicated tool is going to make your life a lot easier.
Here are four research repository tools to consider for storing existing and new research insights, and for working cross-functionally with multiple teams.
1. Confluence
Confluence is a team workspace tool by Atlassian that streamlines remote work. You can use this platform to create research docs from scratch, share them with your team, and save them for future reference. Plus, the tool lets you design wikis for each research study to organize everything—raw data, findings, and reports—in a structured manner.
You also get a centralized space to store data and docs from multiple accounts, so everyone involved can contribute to and access your repository.
2. Condens
Condens is a centralized UX research and analysis platform for storing, structuring, and analyzing user research data, and sharing those insights across your organization. You can collaborate on data analysis, identify patterns, and create artifacts for comprehensive outcomes.
With a detailed research repository guide to help you on your way, it's a great tool for teams of any size. Plus, you can also embed live Maze reports, alongside other UX research and analysis tools.
3. Dovetail
Dovetail is a user research platform for collecting, analyzing, and storing research projects. You can save and retrieve all documents from a single database, while tags, labels, and descriptions also simplify the task of cataloging past data.
The platform gives you a strong search function to quickly find any file or data from the entire hub. You can also use multiple templates to migrate data from different platforms to Dovetail.
4. Airtable
Airtable is a low-code tool for building apps that enables you to create a custom database for your UX research projects. It’s ideal for product teams looking to set up an entire repository from scratch, as you configure everything yourself.
You get a high degree of flexibility to integrate different data sources, design a customized interface, and access data in dynamic views. What’s more, you can build an interactive relational database to request resources from others and stay on top of the status of existing work.
Here’s a research repository database to get started.
Creating a UX research repository: 5 Best practices
Designing a bespoke repository to organize your research requires careful planning, a thorough setup workflow, and continuous maintenance. But once it’s ready, you’ll wonder how your product team survived without it. To get you started, here are our five best practices for implementing this process effectively and kickstarting your repository.
1. Define clear objectives for your repository
Start by outlining what you want to achieve with a shared research library. You might want to standardize research methodologies across the board or build alignment between multiple teams to create more consistent outputs.
This goal-setting exercise gives all team members a purpose to pursue in upcoming projects. When they know what success looks like, they can strategically plan research questions and choose analysis methods.
Knowing your objectives will also help shortlist the best research and usability testing tools. You can invest in a good platform by evaluating a few core capabilities needed to achieve your goals (more on that shortly).
2. Create a structure and define taxonomy
You can structure your UX repository as a database with multiple fields. For example, here are a few fields to easily categorize responses when documenting user experience research:
- Key insights
- User quotes
- Criticality
- Sources of knowledge
- Possible solutions that were considered
Besides creating a structure to document a research study, you also need a well-defined taxonomy to help people find information. Defining your research taxonomy will help you categorize information effectively and design consistent naming conventions.
For example, you can create a set of predefined categories for every research study, like the following (see the code sketch after this list):
- Focus country: USA, Australia, Canada, France
- Collected feedback: Feature request, feature enhancement, bugs
- Methodology: Usability testing, user interview, survey
- User journey stage: Before activation, power user, after renewal
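To make the idea concrete, here’s a minimal sketch of how a defined taxonomy can enforce consistent tagging, assuming a Python-based workflow; the category names and values below are illustrative, not a prescribed schema:

```python
# Hypothetical schema: category names and values are illustrative,
# not a prescribed standard.
from dataclasses import dataclass, field

TAXONOMY = {
    "focus_country": {"USA", "Australia", "Canada", "France"},
    "collected_feedback": {"feature request", "feature enhancement", "bug"},
    "methodology": {"usability testing", "user interview", "survey"},
    "user_journey_stage": {"before activation", "power user", "after renewal"},
}

@dataclass
class StudyEntry:
    title: str
    key_insights: list[str] = field(default_factory=list)
    user_quotes: list[str] = field(default_factory=list)
    tags: dict[str, str] = field(default_factory=dict)

    def validate(self) -> None:
        # Reject tags outside the agreed taxonomy so naming stays consistent.
        for category, value in self.tags.items():
            allowed = TAXONOMY.get(category)
            if allowed is None:
                raise ValueError(f"Unknown category: {category}")
            if value not in allowed:
                raise ValueError(f"{value!r} is not a valid {category}")

entry = StudyEntry(
    title="Checkout usability study",
    tags={"methodology": "usability testing", "focus_country": "Canada"},
)
entry.validate()  # Raises if a tag drifts from the shared vocabulary
```

Validating tags at write time is what keeps the repository searchable later, because every study ends up using the same vocabulary.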
💡 Less jargon, more alignment
Involve multiple stakeholders when defining the terminology for your library, and check it aligns with any internal style guides or glossaries. This ensures alignment from the outset, and makes it easy for everyone to filter results and find what they need.
3. Distribute knowledge through atomic research
Atomic research is an approach to UX research that prioritizes user research data organization. It proposes that you conduct research so that every piece of the project becomes easily reusable and accessible to all stakeholders.
According to the atomic research approach, you need to consider four components to organize your repository:
- Experiments (We did this): Explain the research methodology and the steps you followed in conducting the study
- Facts (We saw this): Document the main findings evident from the data gathered in the study
- Insights (Which made us think): Capture the key insights extracted from analyzing the research data
- Opportunities (So we’ll do that): List the decisions and action items resulting from the research analysis
Using atomic research, you can create nuggets to organize information in your repository.
Nuggets are the smallest unit of information containing one specific insight, like a user quote, data point, or observation. The different types of nuggets to categorize your research data include observations , evidence , and tags . By breaking down a vast study into smaller nuggets, you can make your repository informative at a glance. You can use your defined taxonomy to label these nuggets.
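As a rough illustration (again in Python, with hypothetical field names), a nugget can be modeled as a small record that links one insight to its evidence and taxonomy labels:

```python
# Hypothetical nugget model; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Nugget:
    """The smallest reusable unit of research knowledge."""
    kind: str                  # "observation", "evidence", ...
    content: str               # e.g. a user quote or single data point
    source_study: str          # the experiment that produced it
    tags: dict[str, str] = field(default_factory=dict)

quote = Nugget(
    kind="evidence",
    content="I gave up at the shipping form - too many fields.",
    source_study="Checkout usability study",
    tags={"user_journey_stage": "before activation",
          "collected_feedback": "bug"},
)
```

Because each nugget carries its own tags and source, a reader can trace any insight back to the study it came from without opening the full report.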
4. Identify the creators and consumers in your team
Before outlining your repository’s structure, you need to define workflows for creating, reviewing, and maintaining the library. Spend some time defining who will:
- Own the setup process and create the overall guidelines
- Access past documents and add contributions consistently
- Maintain the documents for easy accessibility
- Only need to access customer insights
Assigning these roles makes it easy to estimate your team's bandwidth for building and maintaining such a massive library. You can also manage permissions in your repository platform to give everyone access to relevant materials and protect confidential resources.
Mia explains why this is important to make your repository more meaningful for end-users:
“You need to keep in mind the JTBD (jobs to be done) framework when building a repository. What do the folks accessing your repository need to do? Who are those people? You need to build your repository with the purpose of those distinct users.”
5. Shortlist and finalize tools based on your goals
When evaluating different research repository tools, consider your requirements and compare different platforms against the essential features you need for this repository. If you’re creating one for the first time, it’s okay to create an experimental setup to understand the impact.
Here are a few key factors to consider when shortlisting research repository tools:
- Ease of setup and use: Choose a platform with a gentle learning curve, especially if you have a big team with multiple members. A quick setup and user-friendly interface can maximize adoption and make your repository more accessible.
- Collaboration capabilities: A good repository lets you interact with different team members through comments, chat boxes, or tags. You can also manage permissions and set up different roles to share relevant research with specific stakeholders and team members.
- Tagging and searchability: Your repository is only as good as its ability to show precise search results for any keyword. Consider the ease of labeling new information and test the search function to check the accuracy of the results.
- Export and integrations: You’ll need to export some data or streamline your entire research ops setup by integrating different tools. So, evaluate each tool’s integration capabilities and the options to export information.
Plus, your ideal tool might be a combination of tools. For example, Steven Zhang, former Senior Software Engineer at Airtable, used a combination of Gong and Airtable when first building a UX research repository. It’s about considering your needs and finding what works for your team.
Democratize user research in your organization
A UX research repository gives you easy access to insights from past projects, and enables you to map new insights to old findings for a more nuanced understanding of your users.
More importantly, building a single source of truth for your entire organization means everyone on your team can access research data to inform their projects.
Different teams can use this data to make strategic design decisions, iterate product messaging, or deliver meaningful customer support.
Sound good? That’s what we thought—build your repository today to evangelize and democratize UX research in your organization.
Frequently asked questions about UX research repositories
How do I create a user research repository?
You can create a user research repository with these best practices:
- Define clear objectives for your repository
- Create a structure and define taxonomy
- Distribute knowledge through atomic research
- Identify the creators and consumers in your team
- Shortlist and finalize tools based on your goals
What makes a good research repository?
A good research repository communicates the team’s mission and vision for using research. It’s also easily searchable, with relevant tags and labels to categorize documents, and includes tools, templates, and other resources for better adoption.
What’s the purpose of a research repository?
A research repository aims to make your UX research accessible to everyone. It democratizes research operations and fosters knowledge-sharing, giving everyone on your team access to critical insights and firsthand user feedback.
How to build a research repository: a step-by-step guide to getting started
Done right, research repositories have the potential to be incredibly powerful assets for any research-driven organisation. But when it comes to building one, it can be difficult to know where to start.
As a result, we see tons of teams jumping in without clearly defining upfront what they actually hope to achieve with the repository, and ending up disappointed when it doesn't deliver the results.
Aside from being frustrating and demoralising for everyone involved, building an unused repository is a waste of money, time, and opportunity.
So how can you avoid this?
In this post, we provide some practical tips to define a clear vision and strategy for your repository in order to help you maximise your chances of success.
🚀 This post is also available as a free, interactive Miro template that you can use to work through each exercise outlined below - available for download here.
Defining the end goal for your repository
To start, you need to define your vision.
Only by setting a clear vision can you start to map out the road towards realising it.
Your vision provides something you can hold yourself accountable to - acting as a north star. As you move forward with the development and rollout of your repository, this will help guide you through important decisions like what tool to use, and who to engage with along the way.
The reality is that building a research repository should be approached like any other product - aiming for progress over perfection with each iteration of the solution.
A very simple question like "What do we hope to accomplish with our research repository within the first 12 months?" is a great starting point.
You need to be clear on the problems that you’re looking to solve - and the desired outcomes from building your repository - before deciding on the best approach.
Building a repository is an investment, so it’s important to consider not just what you want to achieve in the next few weeks or months, but also in the longer term to ensure your repository is scalable.
Whatever the ultimate goal (or goals), capturing the answer to this question will help you to focus on outcomes over output.
🔎 How to do this in practice…
1. Complete some upfront discovery
In a previous post we discussed how to conduct some upfront discovery to help with understanding today’s biggest challenges when it comes to accessing and leveraging research insights.
⏰ You should aim to complete your upfront discovery within a couple of hours, spending 20-30 mins interviewing each stakeholder (we recommend talking with at least 5 people, both researchers and non-researchers).
2. Prioritise the problems you want to solve
Start by spending some time reviewing the current challenges your team and organisation are facing when it comes to leveraging research and insights.
You can run a simple affinity mapping exercise to highlight the common themes from your discovery and prioritise the top 1-3 problems that you’d like to solve using your repository.
💡 Example challenges might include:
Struggling to understand what research has already been conducted to date, leading to teams repeating previous research
Looking for better ways to capture and analyse raw data e.g. user interviews
Spending lots of time packaging up research findings for wider stakeholders
Drowning in research reports and artefacts, and in need of a better way to access and leverage existing insights
Lacking engagement in research from key decision makers across the organisation
⏰ You should aim to confirm what you want to focus on solving with your repository within 45-60 mins (based on a group of up to 6 people).
3. Consider what future success looks like
Next you want to take some time to think about what success looks like one year from now, casting your mind to the future and capturing what you’d like to achieve with your repository in this time.
A helpful exercise is to imagine the headline quotes for an internal company-wide newsletter talking about the impact that your new research repository has had across the business.
The ‘Jobs to be done’ framework provides a helpful way to format the outputs for this activity, helping you to empathise with what the end users of your repository might expect to experience by way of outcomes.
💡 Example headlines might include:
“When starting a new research project, people are clear on the research that’s already been conducted, so that we’re not repeating previous research” Research Manager
“During a study, we’re able to quickly identify and share the key insights from our user interviews to help increase confidence around what our customers are currently struggling with” Researcher
“Our designers are able to leverage key insights when designing the solution for a new user journey or product feature, helping us to derisk our most critical design decisions” Product Design Director
“Our product roadmap is driven by customer insights, and building new features based on opinion is now a thing of the past” Head of Product
“We’ve been able to use the key research findings from our research team to help us better articulate the benefits of our product and increase the number of new deals” Sales Lead
“Our research is being referenced regularly by C-level leadership at our quarterly townhall meetings, which has helped to raise the profile of our team and the research we’re conducting” Head of Research
Ask yourself what these headlines might say, and add them to the front page of a newspaper image.
You then want to discuss each of these headlines across the group and fold them into a concise vision statement for your research repository - something memorable and inspirational that you can work towards achieving.
💡Example vision statements:
‘Our research repository makes it easy for anyone at our company to access the key learnings from our research, so that key decisions across the organisation are driven by insight’
‘Our research repository acts as a single source of truth for all of our research findings, so that we’re able to query all of our existing insights from one central place’
‘Our research repository helps researchers to analyse and synthesise the data captured from user interviews, so that we’re able to accelerate the discovery of actionable insights’
‘Our research repository is used to drive collaborative research across researchers and teams, helping to eliminate data silos, foster innovation and advance knowledge across disciplines’
‘Our research repository empowers people to make a meaningful impact with their research by providing a platform that enables the translation of research findings into remarkable products for our customers’
⏰ You should aim to agree the vision for your repository within 45-60 mins (based on a group of up to 6 people).
Creating a plan to realise your vision
Having a vision alone isn't going to make your repository a success. You also need to establish a set of short-term objectives, which you can use to plan a series of activities to help you make progress towards this.
Focus your thinking around the more immediate future, and what you want to achieve within the first 3 months of building your repository.
Alongside the short-term objectives you’re going to work towards, it’s also important to consider how you’ll measure your progress, so that you can understand what’s working well, and what might require further attention.
Agreeing a set of success metrics is key to holding yourself accountable to making a positive impact with each new iteration. This also helps you to demonstrate progress to others from as early on in the process as possible.
1. Establish 1-3 short-term objectives
Take your vision statement and consider the first 1-3 results that you want to achieve within the first 3 months of working towards this.
These objectives need to be realistic and achievable given the 3 month timeframe, so that you’re able to build some momentum and set yourself up for success from the very start of the process.
💡Example objectives:
Improve how insights are defined and captured by the research team
Revisit our existing research to identify what data we want to add to our new research repository
Improve how our research findings are organised, considering how our repository might be utilised by researchers and wider teams
Initial group of champions bought-in and actively using our research repository
Improve the level of engagement with our research from wider teams and stakeholders
Capture your 3 month objectives underneath your vision, leaving space to consider the activities that you need to complete in order to realise each of these.
2. Identify how to achieve each objective
Each activity that you commit to should be something that an individual or small group of people can comfortably achieve within the first 3 months of building your repository.
Come up with some ideas for each objective, and then prioritise completing the activities that will result in the biggest impact with the least effort first.
💡Example activities:
Agree a definition for strategic and tactical insights to help with identifying the previous data that we want to add to our new research repository
Revisit the past 6 months of research and capture the data we want to add to our repository as an initial body of knowledge
Create the first draft taxonomy for our research repository, testing this with a small group of wider stakeholders
Launch the repository with an initial body of knowledge to a group of wider repository champions
Start distributing a regular round up of key insights stored in the repository
You can add your activities to a simple kanban board, ordering your ‘To do’ column with the most impactful tasks up top, and using this to track your progress and make visible who’s working on which tasks throughout the initial build of your repository.
This is something you can come back to and revisit as you move through the wider roll out of your repository - adding any new activities into the board and moving these through to ‘Done’ as they’re completed.
⚠️ At this stage it’s also important to call out any risks or dependencies that could derail your progress towards completing each activity, such as capacity, or requiring support from other individuals or teams.
3. Agree how you’ll measure success
Lastly, you’ll need a way to measure success as you work on the activities you’ve associated with each of your short term objectives.
We recommend choosing 1-3 metrics that you can measure and track as you move forward with everything, considering ways to capture and review the data for each of these.
⚠️ Instead of thinking of these metrics as targets, we recommend using them to measure your progress - helping you to identify any activities that aren’t going so well and might require further attention.
💡Example success metrics:
Usage metrics - Number of insights captured, Active users of the repository, Number of searches performed, Number of insights viewed and shared
User feedback - Usability feedback for your repository, User satisfaction (CSAT), NPS (how likely someone is to recommend using your repository)
Research impact - Number of stakeholder requests for research, Time spent responding to requests, Level of confidence, Repeatable value of research, Amount of duplicated research, Time spent onboarding new joiners
Wider impact - Mentions of your research (and repository) internally, Links to your research findings from other initiatives e.g. discovery projects, product roadmaps, Customers praising solutions that were fuelled by your research
Think about how often you want to capture and communicate this information to the rest of the team, to help motivate everyone to keep making progress.
By establishing key metrics, you can track your progress and determine whether your repository is achieving its intended goals.
⏰ You should aim to create a measurable action plan for your repository within 60-90 mins (based on a group of up to 6 people).
🚀 Why not use our free, downloadable Miro template to start putting all of this into action today - available for download here.
To summarise
As with the development of any product, the cost of investing time upfront to ensure you’re building the right thing for your end users is far lower than the cost of building the wrong thing - repositories are no different!
A well-executed research repository can be an extremely valuable asset for your organisation, but building one requires consideration and planning - and defining a clear vision and strategy upfront will help to maximise your chances of success.
It’s important to not feel pressured to nail every objective that you set in the first few weeks or months. Like any product, the further you progress, the more your strategy will evolve and shift. The most important thing is getting started with the right foundations in place, and starting to drive some real impact.
We hope this practical guide will help you to get started on building an effective research repository for your organisation. Thanks and happy researching!
Tetra Insights
Research Repositories in Action: 6 Best Practices
Data in research is the lifeblood that fuels our decisions, insights, and innovations. To harness its power effectively, you need a robust research repository to store, organize, and manage that data. Whether you’re part of a bustling research team or a solo researcher, implementing best practices in your research repository can make a world of difference, not only in terms of efficiency but in collaboration and decision-making as well.
Let’s delve into these best practices and see how they can turn your research repository into a well-oiled machine.
Understanding Research Repositories
Before we dive into the best practices, let’s make sure we’re on the same page about what research repositories are.
A research repository, at its core, serves as a centralized hub for collecting and preserving your research data in various forms. This could encompass raw data, documents, research reports, presentations, datasets, and more. It’s akin to an organized library where each piece of information is carefully cataloged, making it readily accessible to researchers, analysts, and stakeholders. Without a well-structured repository, valuable research assets might become scattered, leading to inefficiency and the risk of data loss.
These repositories act as the backbone of informed decision-making and knowledge sharing within organizations. They offer several key advantages.
The Benefits of Effective Research Repositories
1. Enhanced Collaboration
A well-structured research repository promotes seamless collaboration among team members. You can share resources, insights, and findings, which is crucial when working on projects that require collective input.
2. Data Accessibility
Imagine having quick access to any research document or piece of data you need, precisely when you need it. A well-organized repository ensures that you can easily retrieve data, speeding up your research process.
3. Streamlined Decision-Making
With all your research assets in one place, decision-making becomes more straightforward. No more searching through endless folders or email chains to find that critical piece of information.
According to research conducted by AWS, 88% of online shoppers wouldn’t return to a website after a bad user experience. The ability to access and analyze user data efficiently can make all the difference in this context.
Now that we have a grasp of research repositories and their benefits, let’s explore six best practices that can help you master the art of managing your research assets.
Best Practice 1 – Define a Clear Taxonomy
A robust and well-defined taxonomy is the backbone of your research repository. It’s more than just categorization; it’s about structuring your repository to ensure that every piece of research data has a logical place. Consider the e-commerce industry. You can create parent categories like “Customer Data,” “Product Information,” and “Sales Statistics.” Within these, establish subcategories such as “Customer Feedback,” “Product Catalogs,” and “Monthly Sales Reports.”
This taxonomy not only organizes your data but also facilitates efficient retrieval. When it comes to the execution, many tools, including Tetra’s research repository, allow for customizable taxonomies that seamlessly align with your research objectives.
Best Practice 2 – Implement Version Control
Version control is your safety net against accidental overwrites, lost revisions, or unintentional data mishaps. You can take inspiration from popular version control systems like Git and apply similar principles to your research assets. Develop a standardized naming convention for your documents, such as “Report_v1,” “Report_v2,” and so on.
Utilize version control features offered by research repository tools like Tetra to easily track and manage different iterations of your documents. This way, you always know which version is the most recent and have access to historical changes, ensuring transparency and accountability in your research processes.
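As a toy example of that naming convention (the helper below is a hypothetical sketch, not a feature of Tetra or Git), bumping the version suffix can be automated so nobody overwrites a document by mis-numbering it:

```python
import re

def next_version(filename: str) -> str:
    """Bump the _vN suffix: 'Report_v2' -> 'Report_v3'."""
    match = re.search(r"_v(\d+)$", filename)
    if match is None:
        return f"{filename}_v1"  # first saved version
    return f"{filename[:match.start()]}_v{int(match.group(1)) + 1}"

assert next_version("Report") == "Report_v1"
assert next_version("Report_v2") == "Report_v3"
```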
Best Practice 3 – Secure Data Access
Research repositories often contain sensitive and confidential data that must be safeguarded against unauthorized access or data loss. Take steps to limit access to authorized personnel by setting robust access controls. Establish backup and disaster recovery mechanisms to ensure that even in the event of a catastrophic data loss scenario, your research assets are recoverable.
Implement encryption and authentication measures to add an extra layer of security. A study by IBM showed data breaches can cost companies an average of $3.86 million. Secure your data to prevent such costly setbacks.
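In practice, limiting access to authorized personnel usually comes down to a role check before any read or write. Here is a minimal sketch, with made-up role names rather than any specific tool’s permission model:

```python
# Hypothetical role-based access sketch; roles and rules are made up.
PERMISSIONS = {
    "researcher": {"read", "write"},
    "stakeholder": {"read"},
    "admin": {"read", "write", "delete"},
}

def authorize(role: str, action: str, confidential: bool = False) -> bool:
    # Confidential artifacts stay admin-only, whatever the requested action.
    if confidential and role != "admin":
        return False
    return action in PERMISSIONS.get(role, set())

assert authorize("stakeholder", "read") is True
assert authorize("stakeholder", "write") is False
assert authorize("researcher", "read", confidential=True) is False
```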
Best Practice 4 – Foster Collaboration
The true power of a research repository lies not just in its storage capacity but in its ability to facilitate seamless collaboration. Encourage cross-functional teamwork within your organization by providing a platform where team members can share insights, findings, and feedback directly within the repository.
This collaborative ecosystem promotes knowledge sharing, idea exchange, and the pooling of diverse skill sets. As the saying goes, “Two heads are better than one,” and by fostering collaboration, your research outcomes can benefit from multiple perspectives and expertise.
Best Practice 5 – Document Thoroughly
When documenting your research, the mantra is simple: document everything. From setting clear research objectives, to detailing data sources and methodologies, to recording findings and insights, comprehensive documentation is a non-negotiable practice. It’s the foundation of credible research and informed decision-making.
A fully featured research repository tool like Tetra includes features that allow you to attach notes, comments, or annotations to specific documents. This enhances the depth of documentation. The next time you’re wondering about the origin of a particular data point or the rationale behind a specific approach, your comprehensive documentation will be your guiding light.
Best Practice 6 – Data Retrieval and Searchability
An often-underestimated aspect of a research repository is its searchability. Your data should not merely be stored; it should be easily retrievable. Tetra allows you to quickly locate the specific data you need, whenever you need it, free of charge. This includes the ability to search by keywords, tags, and metadata.
Make sure your data is effectively indexed to enable efficient searching. If your team members are spending excessive time looking for data, it’s a sign that your repository lacks adequate searchability. Enhanced search capabilities, coupled with features like advanced filters and content indexing, significantly contribute to improving your research efficiency.
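To see why indexing matters, here is a bare-bones sketch of an inverted index over tagged documents, the basic structure behind keyword search in most repository tools (the data model is illustrative):

```python
from collections import defaultdict

# Map each keyword or tag to the set of documents containing it.
index: dict[str, set[str]] = defaultdict(set)

def add_document(doc_id: str, text: str, tags: list[str]) -> None:
    for token in text.lower().split() + [t.lower() for t in tags]:
        index[token].add(doc_id)

def search(*terms: str) -> set[str]:
    # Documents matching ALL terms (set intersection).
    hits = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*hits) if hits else set()

add_document("study-42", "checkout flow usability findings", ["e-commerce"])
add_document("study-43", "pricing page survey results", ["e-commerce"])
print(search("e-commerce", "usability"))  # {'study-42'}
```

Real tools add stemming, ranking, and metadata filters on top, but the principle is the same: if content isn’t tokenized and indexed when it’s filed, it won’t surface when someone searches.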
In Conclusion
An efficiently managed research repository can significantly impact your research capabilities. It streamlines your processes, fosters collaboration, and ultimately helps you make informed decisions. The key to success lies in understanding, implementing, and constantly improving these best practices. Your research assets are the foundation of your work—treat them with care, and they will serve you well.
At Tetra, we understand the importance of well-structured research repositories. Our tools and expertise can help you optimize your research repository management. Don’t miss the opportunity to enhance your research practices. Elevate your research with Tetra’s solutions.
The complete UX research repository guide for 2024
After you finish a research project, your hard-earned insights shouldn’t be left to fend for themselves. That’s how they get lost in inboxes, folders, or worse — the trash.
You need a home for your findings, one that's as easy to organize today as it is to search at a moment’s notice years later.
You need a UX research repository.
Without a central repository, your research efforts can become inefficient and even wasteful. Research lives scattered across multiple tools without a consistent format for collecting, synthesizing, and analyzing data. This makes it difficult to share what you learn to inform your organization’s key product decisions.
In this guide, we’ll cover all things research repositories – from definitions, benefits, and tools, to tips for building a healthy repository that enables your company to successfully conduct, organize, store, and access research.
Let’s get started.
What is a UX research repository?
A UX research repository — "repo" for short — is a central place to store, organize, and share an organization’s research artifacts and insights.
Think of it as a digital library dedicated to your company’s research knowledge and data.
Today, most research repositories are cloud-based. Content found in a repository typically falls into one of two broad categories:
- Input used in conducting UX research — information for planning and undertaking research.
- Output derived from conducting UX research, which may include the study’s findings and reports.
At the organizational level, the ideal repository should promote and advance UX research awareness by welcoming participation from leadership, product owners, and other cross-functional stakeholders. It should also encourage operationally sound habits and practices for greater productivity at every stage of the research process, from planning through synthesis.
Benefits of UX research repositories
In recent years, research repositories have grown in popularity due to the variety of benefits they offer to researchers and their organizations. These benefits include:
Centralizing research data
One of the main benefits of having a UX research repository is that it provides a secure, centralized location to store and organize research data. This makes it easy for information to be quickly accessed and retrieved as needed, saving time and resources.
By storing all user research data in a single place, teams can avoid the costs of redundant work and even use existing insights to augment new research.
Additionally, a centralized UX research repository can help teams identify research gaps and areas to study in the future based on the needs of their organizations. For easy retrieval and use, it’s important to develop a repeatable system for tagging research artifacts and logging metadata. This ensures information is discoverable for everyone with access.
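A ‘repeatable system’ can start as small as a required metadata record per artifact. A hedged sketch in Python, with illustrative field names rather than any tool’s actual schema:

```python
# Illustrative required-fields check, not any tool's actual schema.
REQUIRED_FIELDS = {"study", "method", "owner", "date", "tags"}

def log_artifact(metadata: dict) -> dict:
    """Refuse to file an artifact unless its metadata is complete."""
    missing = REQUIRED_FIELDS - metadata.keys()
    if missing:
        raise ValueError(f"Incomplete metadata, missing: {sorted(missing)}")
    return metadata

log_artifact({
    "study": "Onboarding interviews Q1",
    "method": "user interview",
    "owner": "research-ops",
    "date": "2024-01-23",
    "tags": ["onboarding", "activation"],
})
```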
Ensuring consistency
Back to the two content types of research repositories.
Input should include UX research methods and methodologies, protocols, and other standard approaches that help guarantee the consistency and accuracy of findings and insights gained from the research you conduct.
Consistency is vital. It ensures that your research isn’t arbitrary or subjective, and can be independently replicated by other people in the organization or elsewhere using the same or similar methods.
The same goes for output. Whether it’s a written insight report or a collection of clipped video highlights from a user interview recording, each should have its own standard format and conventions.
Enhancing decision-making
The importance of data-driven (or data-informed, as some prefer to say) decision-making can’t be overstated. After all, that’s really what having a healthy repository is all about.
A centralized UX research repository is a valuable asset for product, design, and development teams because it allows them to store and access powerful data and insights. This enables product managers, designers, and development teams to better understand user behavior, challenges, preferences, and expectations — and ultimately, make user-centric decisions to build better products.
Streamlining the research process
Planning for future research. Taking notes during research. Transcribing interviews. Analyzing raw data. Identifying key highlights. Generating actionable insights. Preparing rich, engaging presentations and reports.
Depending on how you work, each of these research activities may involve several tools. That’s a lot of jumping around and context switching, which isn’t great for productivity.
If you’re not careful, your research toolstack can grow too fast and too big, making it difficult to manage.
But finding the right repository for your situation will help you streamline your research processes. This can significantly reduce the need for juggling multiple tools, save valuable time, and improve the quality of results. It also sets you up with the systems you need to scale as your organization — and its demand for research — grows.
Using a repository to streamline processes ensures insights can easily be traced back to the raw data they came from. It’s your organization’s source of truth for UX research.
Read: Scaling research that rocks 🤟(or at least isn’t rubbish) with Kate Towsey
Keeping all feedback in a single location
Conducting research isn’t the only way to uncover insights. Incoming user feedback is also incredibly valuable. That’s why researchers often include various feedback sources in their research repositories.
Feedback can originate from diverse channels such as public reviews on sites like G2 and TrustPilot, sales conversations, and customer support tickets.
Leveraging information from various sources helps keep a healthy mix of positive and negative feedback always coming in. Not only does this widen research perspectives for better decision-making, but it also strengthens the quality of research by using all available data and potentially minimizes the need or scope for new studies.
Greater collaboration
Having a repository makes research a team sport by facilitating collaboration across the entire organization, both in person and remotely. With a central repository, research findings can be shared and discussed collectively, which encourages cross-functional collaboration.
This democratization of research insights boosts transparency while ensuring that teams are aligned and working towards achieving common goals. It helps prevent the duplication of research efforts since team members can easily see what others have and haven’t done.
A repository can also provide a framework that empowers non-researchers (e.g., product managers) to independently carry out safe, effective user research without having to depend exclusively on research staff.
Build & maintain a participant panel
All this talk about storing research. But what about your participants and their data?
While a participant database may not be the first thing that comes to mind when thinking about establishing a UX research repository, it should factor into your decision-making process.
A healthy participant panel helps researchers keep a pulse on participant interest, activity, and engagement. You can use it to filter and find candidates with the right attributes for your study, see who has participated in past research, and inform recruitment for upcoming studies. It can also help prevent over-contacting anyone (because the last thing you want to do is annoy your panel).
These are all reasons why it makes sense to integrate your panel with your repository if possible. Keeping participant data and research data close makes for tighter execution at every step.
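As a tiny illustration of those panel queries (the participant fields here are hypothetical), filtering for eligible candidates while excluding anyone contacted recently might look like this:

```python
from datetime import date, timedelta

# Hypothetical panel records; real panels track far more attributes.
participants = [
    {"name": "Ana", "role": "power user", "last_contacted": date(2024, 1, 2)},
    {"name": "Ben", "role": "power user", "last_contacted": date(2023, 9, 14)},
    {"name": "Cam", "role": "new user", "last_contacted": date(2023, 10, 1)},
]

def eligible(role: str, cooldown_days: int = 90) -> list[dict]:
    """Match on attributes and skip anyone contacted within the cooldown."""
    cutoff = date.today() - timedelta(days=cooldown_days)
    return [p for p in participants
            if p["role"] == role and p["last_contacted"] < cutoff]

print([p["name"] for p in eligible("power user")])  # e.g. ['Ben']
```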
Read: The complete guide to panel management for 2023
Democratize research access
A research repository makes democratizing research in your organization possible. Access may not necessarily be reserved for only those in the research department (or R&D) but may also be granted to other teams, stakeholders, or everyone in the organization.
From product managers and designers to marketers and sales reps, access to an organization’s repository lowers the barrier to entry for getting involved in research, simply by exploring what others are working on.
That said, adequate access monitoring and control measures must be put in place, and training should be offered to those who are new to using a research repository.
How to build an effective UX research repository for your team
Before you pick your repository tool, it’s important to evaluate the other tools and processes your organization currently uses. The road to launching an effective research repository can be roughly broken down into the following five steps:
1. Set strategic goals
A common error when trying to find the best UX research repository for your needs is to dive straight into the search for tools and try to compare them. (It’s why we haven’t so much as mentioned a single option so far in this guide.) Like any type of software, comparing repositories is a difficult task if you don’t have a clear understanding of what to look for.
First things first: seek support and input from your team and stakeholders early on.
It might help to conduct stakeholder interviews at this stage to ensure collaboration and engagement with your future repository. Involving stakeholders can help you see things you might have missed and increase the likelihood of smooth operations once the repository is adopted.
Now, it’s time to define your goals for the ideal UX research repository in your organization. What do you intend to achieve (describe the best-case scenario in detail)? How will building a repository impact you as a researcher, as well as your stakeholders and the business as a whole? Does this decision to build a repository align with larger business objectives?
Consider developing a mind map of what research looks like for your team, or even a journey map for the entire research process, outlining what it looks like from start to finish. Setting strategic goals for your repository will help your team to better understand its functions and benefits, and help you maximize its adoption and impact.
2. Identify your team's requirements
Once you’ve set strategic goals for your UX research repository, the next step is to establish your research team's requirements. This may require you to conduct a gap analysis.
The first thing to consider will be the repository tool itself and how its features align with your strategic goals. In most situations, it makes sense to place an emphasis on data security, accessibility settings for team members and stakeholders, user-friendliness, and ease of sharing research findings. Be as thorough as possible.
You’ll also need to consider potential workflow changes given the habits of the employees involved. The bigger your company is, the more sensitive this will be. What changes will have to be made to your current processes? What new tasks will need to be planned for? Which procedures will need modification, and which ones will be scrapped? In particular, think through the challenges faced by product managers, as they hold a great deal of operational responsibility.
With these considerations in mind, draw up a rough list of potential repository tool candidates that match your goals.
3. Do your due diligence on repository tools
In a sea of tools, where do you even start?
Likely with a Google search — but analyzing the top results one by one can get confusing fast. Using a software review comparison site like Capterra or G2 will likely be more effective in making your shortlist of tools that meet your requirements.
Better yet, ask fellow researchers you know for recommendations directly or post online where your peers hang out, such as the ResearchOps Slack Community.
Once you’ve developed your shortlist of tools, drill deeper. Depending on the size of your team and budget, pricing may be the first thing you check or the last. Either way, get a ballpark idea of how much you can expect to spend (and perhaps be wary if a company doesn’t make their pricing publicly available). Take a look at help centers to see how easy it is to find answers to problems and get in touch with support. Check out the blog – does this company put out educational content you might actually read? Do they have a library of helpful how-to videos or templates? And what about their social presence — do they seem to have an engaged community of evangelists, or are their accounts littered with complaints from frustrated customers?
During your due diligence, you and your team will hopefully be able to weed out the pretenders and narrow your list down to the true contenders. From there, it’s time to take your new tool(s) for a spin.
4. Demo & trial your best tooling options
Most UX research repository tools offer two ways to get started: immediately with a trial (often free) in just a few clicks or in the next few days by scheduling a demo with the sales team. (If you’re shopping for an enterprise plan for a larger team, it’ll be the demo.)
Then, it’s time for more due diligence. If possible, work with your team to trial and/or demo multiple tools at once. You’ll want to evaluate everything from ease of setup and onboarding to actually organizing and storing your research. It’s also important to get a feel for the company representative(s) who will be managing your account. Do they inspire confidence or concern? Are they invested in achieving your team’s goals or just here to check the boxes?
Not all trial and demo processes will look the same. Make sure to take copious notes throughout, as these will come in handy later if you need to build a business case to present to your procurement team.
5. Create an onboarding plan
Your due diligence is done. Your team has collectively determined a winner (after duking it out in a spirited debate over two final options, of course). You’ve even made your way through procurement and legal without any major issues. Now, it’s time to onboard your new UX research repository.
By now, you should already have a solid idea of what to expect from your trial run and product demos. Next, you need to delegate implementation to one or more capable individuals. Key roles here include purchasing the repository product and managing billing, defining the repository’s data structures, and granting access to users. If you're leaving an old tool for a new one, that also means gearing up for a migration of all existing research artifacts.
You’ll also want to put together an onboarding plan for all involved stakeholders, plus a presentation to share with the company as a whole.
Adopting a research repository can be a gradual process that takes time and requires an effective implementation plan to ensure success. It won’t happen overnight.
Consider ranking your adoption goals and focusing on those you intend to achieve first, to avoid putting too much pressure on yourself and your team.
Read: The 5 Cs of a successful research repository with Julian Della Mattia
8 of the best UX research repository tools
I know what you’re thinking.
Finally, the millionth edition of the “# Best [Software] Tools in 2023” list I’ve been waiting for! Surely there won’t be any bias, and the company writing this guide won’t rank itself #1 above all its competitors.
You’ve made it this far and we’ve covered a lot. It’s only right we mix in a joke.
On a serious note, there’s no one-size-fits-all repository tool. (And if anyone tells you that, run.) So in the interest of transparency, we’ve compiled 8 of the top research repository tools on the market. They’re in no particular order, but Great Question is first because, well, it’s our website.
Great Question
At Great Question, we’re building the home of research for user-centric teams like Canva, Drift, and Brex, to name a few. A cornerstone of this is our repository. Think of the Great Question Research Repository as an insights hub where you can:
- Capture, store, and tag all of your research. Never forget to hit record again with automatic interview recordings. Get free transcriptions that you can easily search. Organize everything with AI-suggested tags specific to your study or used globally across your team’s whole account. Upload or import external recordings in bulk for free transcription any time. Integrations include Zoom, Google Meet, and Microsoft Teams.
- Analyze research data and create artifacts. Select interview transcript text to create instant video highlights of your key moments, then combine multiple highlights into a single highlight reel for maximum impact. You can also embed highlights and reels in your written insight reports.
- Share and collaborate with your team. Copy and paste a link to share any highlight, reel, or insight with your team wherever they work — even if they don’t have a Great Question account. Integrations include Figma, Zapier, and Slack, which allows you to send automatic notifications to your team’s channel when an interview is scheduled, a survey is completed, or other research events occur.
- Discover and learn from past research. Search the repository using keywords or custom filters, and view research artifacts in grid, table, or kanban layouts. Quickly find what you’re looking for to prevent duplicate work or augment new research.
- Protect data with enterprise-grade security. Great Question is SOC-2, GDPR, and HIPAA compliant, and meets enterprise security requirements through regular penetration testing. Your data is safe with us.
We’re also hard at work building smart, ethical ways to leverage AI for UX research. This means helping researchers save time on tedious tasks so they can focus on more important, impactful work. Think AI-suggested interview summaries, survey questions, highlights, titles, and tags.
What makes Great Question different from other tools is that it’s much more than just a repository. With our all-in-one platform, you can:
- Manage a panel of your own users via CRM integration or list upload, or build a panel of non-users through our third-party integrations with Respondent and Prolific.
- Sync your work calendar with your research calendar to prevent conflicts and streamline scheduling with continuous invites and availability.
- Personalize participant recruitment with branded emails and landing pages, send automatic reminders, and prevent over-contacting by setting guardrails.
- Run your favorite research methods, like user interviews, focus groups, surveys, unmoderated studies, and more. (Coming soon: prototype testing, tree testing, and card sorting.)
- Set global incentives for research participants with 1,000+ options in 200+ countries and automatically distribute upon study completion.
If you’re ready to take our repository for a spin (or interested in learning more about some of the features listed above), book a demo here to get started.
Dovetail
Founded in 2017, Dovetail is a popular research repository that enables users to generate research reports in a matter of minutes. This cloud-based customer knowledge software assists product, design, and development teams with user research and collaboration. Notable features include full-text search, usability testing, pattern recognition, file sharing, tagging, analytics, and graphical reporting.
Through the Dovetail platform, administrators can store user research data in a unified location, develop procedures for customer interviews, embed videos, images, and recordings in notes, as well as capture demographic and qualitative data. Dovetail also allows teams to analyze data, including survey responses, transcripts, and notes; create a standard set of tags for different projects; leverage natural language processing (NLP) for sentiment analysis; and explore metrics on graphs and charts.
Dovetail helps managers boost collaboration between user experience designers, product teams, and other stakeholders, in addition to providing role-based permissions to users, maintaining project data, and storing billing information for a multiplicity of customers. Team members can search for tags, notes, or insights across various projects as well as export data in CSV format.
Grain
Grain is a UX research repository that helps researchers collect and organize user interviews as well as create and share research insights and findings with visually appealing stories. During these user interviews, Grain can record, tag, transcribe, and organize your qualitative data. It also allows users to import their pre-recorded interviews from Zoom Cloud or manually upload them.
You can add your team members, stakeholders, and collaborators to your workspace so that they can access all your research data at any time. As soon as you’ve recorded your interview in Grain, you can slice and dice your data in a variety of ways to make sharing insights easy. Selecting the text in the transcript will enable you to clip and share important moments in a user interview. You can also create an engaging story by combining insights obtained from multiple interviews.
Copy and share the Grain AI summary with one click. Also, share insights and key moments with other teams by copying and pasting to embed Grain videos in communication software such as Slack and collaboration tools such as Notion or Miro. Grain is equipped with a native integration capability that makes it possible for you to send research insights directly to your product board.
Userbit
Userbit is a tool that not only enables you to collect and store data from user interviews (with highlights, transcripts, and tags) but also includes a suite of features to help you transform data into meaningful insights.
Easily convert your transcripts to visual word clouds or affinity diagrams with Userbit's visualizations. Userbit offers a great way to quickly spot patterns and relationships in your data in order to start generating insights. Another valuable Userbit feature is the capacity to develop user personas directly from research data, allowing you to save a lot of time since it eliminates the need to manually create personas from scratch. With Userbit, you’ll have a mental picture of your users based on how they think and behave. This can be very helpful when attempting to design an intuitive user experience.
Userbit ensures easy sharing of findings with your team members and stakeholders, thus enabling the whole team to collaborate effectively so as to develop the ideal design process and user path.
Condens
Condens is a tool that can help you structure and organize your user research data effectively. With Condens, you can create a UX research repository that's both easy to use and well-organized. Condens is designed for anyone: researchers, product managers, designers, and those with little or no technical background.
One distinguishing feature of Condens is its pleasant visual interface, which allows you to view all your data at a glance. You can quickly filter and search for particular items, making it easy to locate what you're looking for, even when faced with a huge amount of data. The AI-assisted transcription feature can speedily transcribe user interviews to ensure prompt data analysis.
Condens boasts a broad range of integrations that include the capacity to easily import data from Google Sheets, Excel, and other research repositories. One advantage of this is that you can start using Condens without having to worry about transferring your data manually. So if easy onboarding and an appealing visual interface are your top priorities, Condens checks all the boxes.
EnjoyHQ
Acquired by UserZoom in 2021, which later merged with UserTesting in 2022, EnjoyHQ is a cloud-based repository that helps UX and product teams learn faster from customers by streamlining the customer research process. EnjoyHQ facilitates the easy centralization, organization, and sharing of all customer insights and data in one location. It has the components needed to build an effective research system that scales.
EnjoyHQ integrates with popular communication and collaboration tools, providing the ability to gather all your data together in seconds. Third-party platforms that seamlessly integrate with EnjoyHQ include Google Docs, Zendesk, Jira Service Desk, Drift, AskNicely, Dropbox, Trello, Trustpilot, and more. Organize all your data in one place, accelerate your analysis process, and easily share insights with team members and stakeholders through EnjoyHQ.
Key features include a collaborative workspace, user management, customer segmentation, sentiment analysis, and app review translations. You can categorize data through tags, metadata, and highlights and also develop a taxonomy to classify research findings for analysis. Managers can prepare summaries and reports as well as monitor audience engagement with respect to the displayed insights. Additionally, presenters can save reports in graphical formats and use links to share them with team members.
Aurelius
Aurelius is a repository that was built by UX researchers for UX researchers. It’s a balanced blend of cost-effectiveness and a suite of features to collect, organize, and synthesize research data. Aurelius helps you analyze data and quickly turn it into valuable insights. Its lean features ensure that you pay for only what you need and nothing else. The Aurelius magic uploader enables you to easily upload your data into the program. Use the Aurelius-Zapier integration or the Aurelius-Zoom integration to import spreadsheets, audio, video, notes, and other file types.
The powerful global tagging feature can be used to tag notes, key insights, and recommendations. AI-powered intelligent keyword analysis helps you identify patterns even in large datasets. The universal search feature will help you quickly locate old research reports, notes, and other data. Add recommendations to each key insight, and Aurelius will automatically generate an editable report you can share with other users.
Aurelius can serve as an extension of your daily workflow in terms of promoting collaboration, encouraging independent research, and helping you obtain research insights that can drive stakeholder action.
Looppanel
Looppanel is a newer repository, founded in 2021 with the goal of enabling product and design teams around the world to build products their users love. This AI-powered research assistant streamlines user research by managing everything from initiating calls to creating user interview templates, recording and transcribing sessions, and assisting teams in discovering and sharing insights faster.
Some of its most popular features include taking time-stamped notes during a user conversation and sharing video clippings from a call with a single click. With Looppanel, teams can analyze and share their findings from Zoom-based user interviews in minutes and centralize research data in one place. It offers highly accurate transcripts across multiple languages, allows users to collaborate with team members for free, and lets them share reports and summaries via a link.
Other general tools that can be used as research repositories
Aside from these core UX research repositories, there are other general tools that can be adapted to get the job of a repository done. Here are a few of them:
Notion
Notion is a powerful, versatile tool you can do a lot with, from research documentation to project management. It's easy to use, incredibly flexible, and has a tidy interface, making it great software for housing user research data. Notion enables you to create custom databases, which is great for organizing your data. You can also attach rich media such as video, audio recordings, and images to your databases.
Like most modern apps, Notion provides a wide range of integrations. For instance, you can easily import data from other programs such as Google Sheets and Excel. This can be useful if you wish to consolidate all your user research data in a single location. Furthermore, extensions such as Repo can help you transform Notion into a dedicated UX research repository with features like highlighting and tagging. Enrich your research data in Notion by adding videos and key moments from your interviews.
Notion is a potential option for researchers, product managers, and designers looking for a versatile tool that can serve a variety of purposes.
Jira
Though Jira is mainly a project management tool popular with software development teams, it can also serve as a storage medium for UX research data and projects. Jira boasts a variety of features that make it suitable for user research. For instance, it can be used to track interviews, facilitate user testing sessions, undertake other user research tasks, and create custom reports. Jira also allows you to create a dedicated research project, making it easy for your team to keep all research data in one place. You can use it to follow the progress of user research and identify areas where improvements are required.
The ability to add attachments to Jira tickets makes storing and sharing user research data, such as screenshots and interview recordings, easy. Jira can be somewhat overwhelming if you are new to project management tools, but it is nonetheless a good tool to store your user research data.
Airtable
Airtable is another database tool capable of serving diverse purposes, including UX research. It comes with a user research template that helps you avoid the stress involved in having to set up a database. A combination of that user research template and another feature — the user feedback template — can help you organize your user research data and feedback in one location.
Easily add attachments such as images, audio files, and videos to enrich your user research data. Organize and find your user research data using the views feature to filter and sort your data or create custom formulas to calculate things like the net promoter score. You can also visualize your data through bar graphs and other means.
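To make that concrete, NPS is simple arithmetic: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). Here is a minimal Python sketch with illustrative sample scores (not a prescription for any particular tool's formula syntax):

# Net promoter score: % promoters (9-10) minus % detractors (0-6).
# The survey scores below are illustrative sample data.
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]
promoters = sum(1 for s in scores if s >= 9)
detractors = sum(1 for s in scores if s <= 6)
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS: {nps:.0f}")  # 5 promoters, 3 detractors across 10 responses -> 20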
Confluence
Confluence is a shared workspace developed by Atlassian to create and manage all your work. Confluence makes it easy to organize and find the information you need. This is one of the main reasons it can be adapted into an effective repository. You can group related pages in a dedicated space for your work, team, or cross-functional projects. Depending on permissions, access to a Confluence workspace can be reserved for only you or other members of your company. Page trees create a hierarchical list of pages within a workspace, highlight topics on parent pages, and help ensure you and your team work tidily.
To find something, just do a quick search of existing pages. You can even locate comments posted to a page by others. Adding visuals to UX concept documentation is not only sensible but simple in Confluence: it integrates easily with a variety of add-ons through which you can quickly attach visual information such as image maps, flow charts, and other diagrams via your editor.
Concept visualization, prototypes, and spec files are all integral components of UX design and should form part of your UX documentation as well. Confluence provides you with the opportunity to visually preview a wide range of file types that you can utilize to bolster your written research documents.
Final thoughts
To build a healthy, mature UX research practice in any organization, you need a repository. But a research repository without a clear strategy won't last long.
That’s why it’s essential to align with your team on strategic goals for your repository, perform due diligence on your tooling options, and run collaborative onboarding to maximize adoption and impact.
With this guide in your back pocket, you’re well on your way to building an effective repository that makes research vital to your organization.
Jack Wolstenholm
Jack is the Content Marketing Lead at Great Question, the end-to-end UX research platform for customer-centric teams. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.
What is a research repository, and why do you need one?
Last updated: 31 January 2024
Reviewed by Miroslav Damyanov
Without one organized source of truth, research can be left in silos, making it incomplete, redundant, and useless when it comes to gaining actionable insights.
A research repository can act as one cohesive place where teams can collate research in meaningful ways. This helps streamline the research process and ensures the insights gathered make a real difference.
What is a research repository?
A research repository acts as a centralized database where information is gathered, stored, analyzed, and archived in one organized space.
In this single source of truth, raw data, documents, reports, observations, and insights can be viewed, managed, and analyzed. This allows teams to organize raw data into themes, gather actionable insights , and share those insights with key stakeholders.
Ultimately, the research repository can make the research you gain much more valuable to the wider organization.
Why do you need a research repository?
Information gathered through the research process can be disparate, challenging to organize, and difficult to obtain actionable insights from.
Some of the most common challenges researchers face include the following:
Information being collected in silos
No single source of truth
Research being conducted multiple times unnecessarily
No seamless way to share research with the wider team
Reports get lost and go unread
Without a way to store information effectively, it can become disparate and inconclusive, lacking utility. This can lead to research being completed by different teams without new insights being gathered.
A research repository can streamline the information gathered to address those key issues, improve processes, and boost efficiency. Among other things, an effective research repository can:
Optimize processes: it can ensure the process of storing, searching, and sharing information is streamlined and optimized across teams.
Minimize redundant research: when all information is stored in one accessible place for all relevant team members, the chances of research being repeated are significantly reduced.
Boost insights: having one source of truth boosts the chances of being able to properly analyze all the research that has been conducted and draw actionable insights from it.
Provide comprehensive data: there’s less risk of gaps in the data when it can be easily viewed and understood. The overall research is also likely to be more comprehensive.
Increase collaboration: given that information can be more easily shared and understood, there’s a higher likelihood of better collaboration and positive actions across the business.
What to include in a research repository
Including the right things in your research repository from the start can help ensure that it provides maximum benefit for your team.
Here are some of the things that should be included in a research repository:
An overall structure
There are many ways to organize the data you collect. To organize it in a way that’s valuable for your organization, you’ll need an overall structure that aligns with your goals.
You might wish to organize projects by research type, project, department, or when the research was completed. This will help you better understand the research you’re looking at and find it quickly.
Metadata
Including information about the research—such as authors, titles, keywords, a description, and dates—can make searching through raw data much faster and make the organization process more efficient.
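As a rough illustration (the field names here are hypothetical, not a prescribed standard), such a metadata record might look like this minimal Python sketch:

# Illustrative metadata for one research artifact; adapt fields to your repository.
study_metadata = {
    "title": "Checkout usability test, round 2",
    "authors": ["A. Researcher"],
    "date": "2024-01-15",
    "keywords": ["checkout", "usability", "mobile"],
    "description": "Moderated usability test of the redesigned checkout flow.",
}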
All key data and information
It’s essential to include all of the key data you’ve gathered in the repository, including supplementary materials. This prevents information gaps, and stakeholders can easily stay informed. You’ll need to include the following information, if relevant:
Research and journey maps
Tools and templates (such as discussion guides, email invitations, consent forms, and participant tracking)
Raw data and artifacts (such as videos, CSV files, and transcripts)
Research findings and insights in various formats (including reports, decks, maps, images, and tables)
Version control
It’s important to use a system that has version control. This ensures the changes (including updates and edits) made by various team members can be viewed and reversed if needed.
What makes a good research repository?
The following key elements make up a good research repository that’s useful for your team:
Access: all key stakeholders should be able to access the repository to ensure there’s an effective flow of information.
Actionable insights: a well-organized research repository should help you get from raw data to actionable insights faster.
Effective searchability: searching through large amounts of research can be very time-consuming. To save time, maximize search and discoverability by clearly labeling and indexing information.
Accuracy: the research in the repository must be accurately completed and organized so that it can be acted on with confidence.
Security: when dealing with data, it’s also important to consider security regulations. For example, any personally identifiable information (PII) must be protected. Depending on the information you gather, you may need password protection, encryption, and access control so that only those who need to read the information can access it.
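As an illustrative sketch of the access-control idea (the roles, fields, and rules below are invented for illustration, not a security recommendation), restricting who can read sensitive artifacts can start as simply as checking a role before returning a record:

# Hypothetical role check before returning sensitive research records.
SENSITIVE_ROLES = {"researcher", "research-ops"}  # roles allowed to view PII

def can_view(record, user_role):
    """Allow PII-bearing records only for roles cleared to handle them."""
    if record.get("contains_pii"):
        return user_role in SENSITIVE_ROLES
    return True

print(can_view({"contains_pii": True}, "designer"))   # False
print(can_view({"contains_pii": False}, "designer"))  # True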
How to create a research repository
Getting started with a research repository doesn’t have to be convoluted or complicated. Taking time at the beginning to set up the repository in an organized way can help keep processes simple further down the line.
The following six steps should simplify the process:
1. Define your goals
Before diving in, consider your organization’s goals. All research should align with these business goals, and they can help inform the repository.
As an example, your goal may be to deeply understand your customers and provide a better customer experience . Setting out this goal will help you decide what information should be collated into your research repository and how it should be organized for maximum benefit.
2. Choose a platform
When choosing a platform, consider the following:
Will it offer a single source of truth?
Is it simple to use?
Is it relevant to your project?
Does it align with your business’s goals?
3. Choose an organizational method
To ensure you’ll be able to easily search for the documents, studies, and data you need, choose an organizational method that will speed up this process.
Choosing whether to organize your data by project, date, research type, or customer segment will make a big difference later on.
4. Upload all materials
Once you have chosen the platform and organization method, it’s time to upload all the research materials you have gathered. This also means including supplementary materials and any other information that will provide a clear picture of your customers.
Keep in mind that the repository is a single source of truth. All materials that relate to the project at hand should be included.
5. Tag or label materials
Adding metadata to your materials will help ensure you can easily search for the information you need. While this process can take time (and can be tempting to skip), it will pay off in the long run.
The right labeling will help all team members access the materials they need. It will also prevent redundant research, which wastes valuable time and money.
6. Share insights
For research to be impactful, you’ll need to gather actionable insights. It’s simpler to spot trends, see themes, and recognize patterns when using a repository. These insights can be shared with key stakeholders for data-driven decision-making and positive action within the organization.
Different types of research repositories
There are many different types of research repositories used across organizations. Here are some of them:
Data repositories: these are used to store large datasets to help organizations deeply understand their customers and other information.
Project repositories: data and information related to a specific project may be stored in a project-specific repository. This can help users understand what is and isn’t related to a project.
Government repositories: research funded by governments or public resources may be stored in government repositories. This data is often publicly available to promote transparent information sharing.
Thesis repositories: academic repositories can store information relevant to theses. This allows the information to be made available to the general public.
Institutional repositories: some organizations and institutions, such as universities, hospitals, and other companies, have repositories to store all relevant information related to the organization.
Build your research repository in Dovetail
With Dovetail, building an insights hub is simple. It functions as a single source of truth where research can be gathered, stored, and analyzed in a streamlined way.
1. Get started with Dovetail
Dovetail is a scalable platform that helps your team easily share the insights you gather for positive actions across the business.
2. Assign a project lead
It’s helpful to have a clear project lead to create the repository. This makes it clear who is responsible and avoids duplication.
3. Create a project
To keep track of data, simply create a project. This is where you’ll upload all the necessary information.
You can create projects based on customer segments, specific products, research methods , or when the research was conducted. The project breakdown will relate back to your overall goals and mission.
4. Upload data and information
Now, you’ll need to upload all of the necessary materials. These might include data from customer interviews , sales calls, product feedback , usability testing , and more. You can also upload supplementary information.
5. Create a taxonomy
Create a taxonomy to organize your data effectively, ensuring each piece of information is tagged consistently.
When creating a taxonomy, consider your goals and how they relate to your customers. Ensure those tags are relevant and helpful.
6. Tag key themes
Once the taxonomy is created, tag each piece of information to ensure you can easily filter data, group themes, and spot trends and patterns.
With Dovetail, automatic clustering helps quickly sort through large amounts of information to uncover themes and highlight patterns. Sentiment analysis can also help you track positive and negative themes over time.
7. Share insights
With Dovetail, it’s simple to organize data by themes to uncover patterns and share impactful insights. You can share these insights with the wider team and key stakeholders, who can use them to make customer-informed decisions across the organization.
8. Use Dovetail as a source of truth
Use your Dovetail repository as a source of truth for new and historic data to keep data and information in one streamlined and efficient place. This will help you better understand your customers and, ultimately, deliver a better experience for them.
Building A Research Repository? Here’s What You Need to Know.
Posted by Joseph Friedman on Feb 9, 2023
You’re a UX Researcher. It’s a Tuesday morning and you have some agile ceremony to go to. Your product owner has changed hands a few times in the three years you’ve been in research at this organization, and they bring up a request for research that they’re interested in doing.
You pause because the request sounds slightly familiar. Didn’t you do this before? Don’t you have some personas that match this discovery need…they might be outdated, but couldn’t this help with steering? Oh, no, wait, maybe that was your peer in the Insights & Analytics department that ran this UX study last year?
This happens a lot on UX teams. This design, this test, this ask, this vibe, these trends are sometimes cyclical. How do we, as a team, help to steer our product towards committing to the new research that matters, especially when there’s existing research that might support some of the questions we have? How might we approach existing research from other departments, other teams, and other products with a lens of how it might impact our own people’s needs? Or, even deeper still, how might we move beyond what is this project and move into the realm of what is this product, or even who is this person?
Without getting too existential, this is what’s known as UX Ops work and there is a clearly identified gap to pair with an opportunity: a Research Repository. One place - searchable, taggable, and trackable - across your entire organization that houses whatever research you’ve accomplished. Survey work from Marketing? Voice of the Customer from Sales wins and losses? Usability testing on the new component update? All of that beautiful customer journey vision work you presented to your SVP last Q2? It’s all there.
Whether you’re new to the idea of a research repository, or you’re just coming in to nod aggressively in this UX-positive echo-chamber, here are some things to keep in mind to make sure your research repository sees success.
6 Considerations for Building Your Research Repository
Always work towards your goal of who this repository is for.
Some people set up a repository for researchers themselves. Or, they extend it to the entire UX team, allowing UX designers the freedom to easily see previous insights or have gut-checks on some of their work without having to engage in a study. Others set up repositories for their entire company. Product people, SVPs, customer success, sales (you name it) can all access whatever is uploaded and pull from the knowledge you’ve built. Whomever you build it for, make sure you identify this and set up clear objectives to drive the need and use.
Without a purpose, the repository becomes a chore - people don’t visit it, and contributors half-heartedly go through the motions without seeing any real benefit. Understanding your audience, how you want to give others access, and how they might glean insights from existing work or even craft their own cross-research insights, optimizes usefulness, navigation, and storage.
Set yourself up for future flexibility
When getting started with what a research repository could be, the first step is to identify how your research could be sorted. Ask yourself: what are your taxonomies, tags, or filters, and how can you ensure that they’re not set in stone?
Some teams start simply, knowing they’re open to grow as they need it - Title of work, Department/Team, Researcher Name, Some relevant tags, and the deliverables themselves (artifacts from prep, fielding, and analysis). Other teams use it as an excuse to introduce some needed consistency in the process. This is an opportunity to standardize how everyone is storing research - if there are user interviews involved, for example, make sure to leave a field for 2-3 relevant video clips. Or, if a project is completed, each entry might require a very brief summary of insights for others to skim.
Once things start to be entered into the system, you have the ability to grow and change. Don’t worry about nailing all of it down immediately. The power of your repository is in its flexibility. As you continue to add research and tag information, you’ll find additional ways to sort, filter, or even add new connections and taxonomies that you may have not had before.
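As a sketch of what that flexibility can mean in practice (the entry fields and values here are invented for illustration), a minimal repository entry might start small and gain new dimensions later without breaking older entries:

# A deliberately minimal repository entry; all field names are illustrative.
entry = {
    "title": "Onboarding interviews, Q3",
    "team": "Growth",
    "researcher": "J. Doe",
    "tags": ["onboarding", "interviews"],
    "summary": "New users struggle to find the import feature.",
}
# Later, introduce a new taxonomy dimension without reworking old entries.
entry.setdefault("product_area", "activation")
print(entry["product_area"])  # "activation"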
Consider permissions and data management
Once you have your audience and basic structure in mind, consider permissions and data management. As you continue to share and socialize this work, there’s an additional opportunity to show off the rigor and craft of your organization’s research ethics.
Most teams already obfuscate PII, but sometimes that isn’t enough. How are you ensuring that each piece of research uploaded also stays in line with any GDPR requirements, and that you can easily remove any specific participant’s data on request? Similarly, what other professional privacy ethics does your team want to establish and champion through this kind of solution?
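As a purely hypothetical sketch of the removal-on-request idea (the data shape and function are invented, and a real process must also cover recordings, transcripts, and backups), deletion can reduce to filtering a participant's identifier out of every stored record:

# Hypothetical sketch: drop all records tied to one participant on request.
def purge_participant(records, participant_id):
    """Return only records that do not reference the given participant."""
    return [r for r in records if r.get("participant_id") != participant_id]

records = [
    {"participant_id": "p-101", "note": "Struggled with the export flow"},
    {"participant_id": "p-102", "note": "Preferred the new navigation"},
]
records = purge_participant(records, "p-101")
print(records)  # only p-102's record remains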
In addition to participant privacy, it’s also important to think through product privacy. Not all organizations are comfortable with every department sharing the intricacies of sometimes sensitive or confidential product research. Giving a template or guidebook to other teams so that they can contribute pieces of their research, even if they can’t be specific about the product or project, is a huge help in empowering your repository.
Build a living and evolving product
If you have a repository currently set up, consider the ways in which your research repository needs to be organic and living, alive and active, and connected to other teams, departments, or even across years of previous research. A successful repository requires maintenance. Links will break, research will need to be scrubbed retroactively, folks will move on to other roles, new people and teams will need to be engaged, and even you might eventually take on a different opportunity.
Some organizations have a curator - this doesn’t need to be one person’s entire role, nor does it even need to be a role that just one person takes on. Some organizations even use this need to establish a cross-department committee of people interested in the care of an organizational repository.
Socialize with pride
So you’ve got a curator or a small committee. Along with being alive, a successful repository also requires discoverability. How are you consistently reaching out to people? Who might be interested in this work, but needs to be kept engaged? How are you playing the role of librarian if someone reaches out with a request?
Jennifer Bohmbach, a UX Researcher here at AnswerLab who chatted with me about repositories, said it best: “There will always be different pockets of things going on that people can’t see, and so how do you make sure there’s a way that people can go in and do research on the research? How can people inform themselves ahead of time of what’s been found instead of going out and repeating research?” The more excited you are to socialize research insights and the work you’re doing to connect the dots or to curate information, the more likely others will be to share in the benefits of your work.
Finally, just simply start simple
A research repository can add value no matter what stage of maturity your UX Ops practice is in. Not everyone is able to have an internal team build a custom product, and that’s OK. Shopping around for some tool might be useful down the road, but many teams need to show the value to get buy-in first. The best way to garner support for a transparent storage solution is to start storing your team’s work in a transparent way - and building the networks and partnerships that can encourage other teams to do the same.
Case Study: Utilizing an Existing Tool to Get Started
One of our clients, a large and established organization, identified the need for a research repository after running successful UX research across a few departments both internally and with the help of AnswerLab’s services. Because of the costs and time associated with recruiting and logistics, leaders wanted a better way to track and glean information from work they’d already done.
We were able to set up something simple for them using Confluence, an internal tool they were already using for other use cases. The main page of the repository was just a table. Searchable? Sure, as long as you knew how to CTRL+F. Taggable? Absolutely, via some text in a column. Uploadable? Sometimes, but you can always provide a link if a file is too large. What was most helpful, in this first iteration, is that it was one place that easily housed everything they might want to reference in the future.
Case Study: Building a Repository from Scratch
Another client of ours had the initiative and resources to develop their own custom repository. This client accomplishes so much research across so many different products and teams that it was sometimes difficult to see how interconnected insights could be. While research on different teams had different overall objectives, there were common themes and consistent insights that could be gleaned no matter which team took on the work. Using other teams’ findings as a starting point, broad research questions could be guided and vague research questions could be made more specific before digging into new work.
This solution certainly didn’t house everything, but it did allow for the opportunity to standardize and socialize all types of work done. And, more importantly, it was open to the entire company to search. Where some teams were more secretive in their work and company shared drives were less accessible, in this tool, they could easily redact any confidential information and still contribute. Additionally, when getting ready to network with a UXR, some colleagues appreciated being able to reference the repository and simply look up the projects that someone’s led.
For me, whenever I consider any UX ops work, it comes out of a more existential need. A need to reflect, a need to organize, a need to operationalize - these are all efforts to reach more mature conclusions, develop a more consistent practice, and deliver a more human experience at every stage. Especially internally.
AnswerLab’s UX research experts can help you plan and execute UX research to populate an existing or future UX repository. We also offer consulting and retainer services in case you need that extra push to dig deeper, develop strategy, or build and implement a research pipeline. Get in touch with a Strategist today to start the conversation.
Research Repositories 101
July 5, 2024
As a research function scales, managing the growing research-related body of knowledge becomes a challenge. It’s common for research insights to get lost in hard-to-find reports. When this happens, research efforts are sometimes duplicated. Enter research repositories: an antidote to some of these common growing pains.
In This Article:
What Is a Research Repository
What Is Included in a Research Repository
Tools for Research Repositories
Research-Repository Types
Helpful Features for Successful Adoption
4 Steps for Creating a Research Repository
What Is a Research Repository
A research repository is a central place where user-research artifacts and outputs are stored so that they can be accessed by others in the organization.
Storing user research centrally in a repository provides the following benefits:
- Insight can be quickly found because research outputs are stored in one place (rather than distributed across many platforms or team spaces).
- Teams can track findings and observations over time and across studies, helping to uncover themes that might not be identified from one study alone.
- Research efforts are not duplicated, as teams can learn from or build on research performed by others.
Creating and maintaining a research repository is often the responsibility of a ResearchOps function.
When successfully implemented, a research repository can be a helpful tool in increasing the UX maturity of an organization, because it makes insights about users accessible to many people.
What Is Included in a Research Repository
Research repositories often house (or link out to) the following items:
- Research reports capture what happened and what was learned in the research study. A research report usually includes overarching themes, detailed findings, and sometimes recommendations.
- Research insights are the detailed findings acquired from each research study. While insights also appear in reports, saving them as their own entities makes them easier to see and address.
- Study materials, such as research plans and screeners, allow team members to learn how research insights were gathered and easily replicate a study method.
- Recordings, clips, and transcriptions make user data easily accessible. Summarizing and transcribing each video allows teams to search for keywords or specific information.
- Raw notes and artifacts from research sessions might be useful for future analysis and can sometimes be easier to read or process than a full transcript or video recording.
Of course, there could be other items included in your repository. There’s no hard rule on what belongs in a research repository. In some organizations, research repositories also store data coming from sources other than user-research studies — for example, customer-support emails, notes from customer-advisory groups, or market research. When choosing what to include in a repository, consider the needs of your team and repository users.
Tools for Research Repositories
Research repositories can be built and hosted in many different tools. Choose a tool that your team (and any colleagues who need to use the repository) can easily access or use.
According to our survey, the most popular tools for research repositories across organizations were:
- Collaboration tools (such as Confluence and Sharepoint) are often already used in many organizations. Since teams and stakeholders can easily access them, they become a natural starting place for many research repositories.
- User-research tools (such as Dovetail and EnjoyHQ) are used by researchers to transcribe and tag video recordings and perform qualitative data analysis. Many of these tools have repository features, making them an obvious repository choice.
- Database tools (such as Notion and Airtable) are often used by teams that already work with databases for product management. Database tools allow for easy cataloging of research projects, deliverables, and insight.
Research-Repository Types
A research repository can take many forms and is often dependent on the tool chosen for the job.
Some repositories act as glorified document libraries, where research reports and study materials are filed away in a specific folder structure. These are common when repositories are housed within collaboration platforms like Sharepoint or Confluence.
Other research repositories are searchable indexes or databases of research findings. These are common when teams pursue atomic research — where knowledge is broken down into “nuggets” or key insights.
Each type of research repository has advantages and disadvantages. The main tradeoff is insight discoverability versus the effort needed to add to the repository. Folder libraries are easy to contribute to and manage, but insights are less discoverable. On the other hand, insight databases are hard to contribute to and manage but provide easy access to research insights.
Of course, a research repository could include both an insights database and a research-document library. For example, an insights database could link to a folder structure containing all the research documentation from the study where the insight originated.
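For instance, in the atomic-research style, each insight “nugget” might be a small searchable record that links back to the folder holding the study’s full documentation. A sketch with invented field names and a hypothetical link:

# Illustrative atomic-research record: the insight is searchable on its own,
# while full study documentation lives in a linked folder library.
insight = {
    "insight": "Users abandon signup when asked for a phone number",
    "tags": ["signup", "forms", "drop-off"],
    "source_study": "2024-03 signup usability test",
    "evidence_folder": "https://example.com/research/2024-03-signup/",  # hypothetical link
}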
Helpful Features for Successful Adoption
Getting people to contribute to and use a research repository can be challenging. Regardless of the tool and type of repository you pursue, here are five attributes that make research repositories easy to use and adopt.
Easy to Access
The tool you use for your repository should be easy to access, use, and learn by teams and stakeholders. A new tool that is unfamiliar or that is hard to learn could stop people from accessing or contributing to your repository.
Flexible Permissions
The right people should have access to the right data. For most organizations, the research repository should not be publicly accessible since research could involve proprietary designs or cause reputational damage. If the repository stores session recordings or identifiable participant data, the right people in your organization should have access to those assets to ensure that participant data is handled appropriately.
Intuitive Navigation or Tags
People should be able to easily find and discover research. If it is too difficult for stakeholders and teams to locate research, they will give up.
If your repository is a document library, folders should be labeled and organized sensibly. If you are using a database or user-research platform, create clear and useful global tags to help contributors label their research and to help people find specific research-related information.
Searchable
Repository users should be able to search by specific keywords (such as user groups, products, or features) to quickly find research insight. A strong and reliable search feature is often essential.
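In spirit, keyword search over tagged insights reduces to simple matching. A minimal Python sketch with illustrative data and matching rules (real repository tools use far richer indexing):

# Minimal keyword search over tagged insights; data and matching are illustrative.
def search(insights, keyword):
    """Return insights whose text or tags contain the keyword."""
    keyword = keyword.lower()
    return [
        i for i in insights
        if keyword in i["text"].lower() or keyword in (t.lower() for t in i["tags"])
    ]

insights = [
    {"text": "Checkout errors confuse users", "tags": ["checkout", "errors"]},
    {"text": "Search filters go unnoticed", "tags": ["search", "navigation"]},
]
print(search(insights, "checkout"))  # matches the first insight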
Exportable, Shareable, and Integrated
Sharing or exporting insight from the repository is important if research is to be disseminated widely. For example, if the repository tool supports integrations with other platforms, research snippets from the repository can be easily shared to Slack or MS Teams channels.
4 Steps for Creating a Research Repository
Step 1: Gain Buy-in
People won’t adopt a research repository if they don’t understand its value. Clearly present your arguments for the repository, including what teams might gain from having one. Gaining buy-in for the initiative and tool is especially important if you need to procure budget to purchase a specialized tool. You may need to show the return on investment (ROI).
Step 2: Do Your Research
Do research before choosing a tool or structure for your repository. Treat the process of developing a repository like building a new product. Start with some discovery and take a user-centered approach.
Some helpful questions to explore:
- How do teams currently do research? What tools do they use?
- How do teams write up or share research insights currently? What works? What doesn’t?
- What questions do stakeholders ask researchers or teams when requesting research insights?
- What counts as research? What kind of research insights need to be stored and socialized?
- Which tools do researchers or teams have access to? What tools seem familiar and are easy to adopt?
If you are procuring a new tool for your repository, your research might include evaluating available tools to learn about their capabilities, pricing models, and constraints. You can also utilize free trials and demos and perform a trial run or private beta test with a new tool to find out what works.
Step 3: Start Simple and Iterate
When creating a tagging taxonomy for your repository, start with a few broad tags rather than getting too granular and specific. This approach will ensure that there aren’t too many tags to learn or apply. The tagging taxonomy may need to change over time as more research and contributors are added to the repository. It’s easier to make iterations if you have a small set of tags.
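For example, a starter taxonomy might be just a few broad tags per dimension, which keeps the set small enough to iterate on. The dimensions and values below are hypothetical:

# A small starter taxonomy: a few broad tags per dimension, refined over time.
# Dimensions and values are hypothetical examples.
taxonomy = {
    "product_area": ["onboarding", "checkout", "search"],
    "user_group": ["new users", "returning users"],
    "method": ["interview", "survey", "usability test"],
}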
Consider testing your proposed tagging taxonomy or navigational structure. Methods like tree testing and card sorting can uncover the best labels, tags, or folder structures.
When thinking about adding content to a new repository, start simple. Instead of migrating all research (and research types) in one go, consider importing the most common or most useful items. Use this as a test run to refine the contribution process and structure for your repository.
Step 4: Onboard and Advocate
The key to successful adoption is a plan for onboarding and change management. Don’t expect the tool to be adopted straight away. Change aversion is common with any new process, design, or tool. Teams and stakeholders may need constant reminders or encouragement to use the repository. You may also need to run training sessions to help people learn how to use it and get value out of it.
Research repositories store and organize UX research, making research insights widely available and easy to consume throughout an organization. When creating a research repository, research available tools, gain feedback from researchers and teams who would use it, and plan to iterate after launch.
Nine best practices for research software registries and repositories
Daniel Garijo, Hervé Ménager, Lorraine Hwang, Ana Trisovic, Michael Hucka, Thomas Morrell, Alice Allen
Received 2021 Sep 28; Accepted 2022 Jun 9; Collection date 2022.
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
Abstract
Scientific software registries and repositories improve software findability and research transparency, provide information for software citations, and foster preservation of computational methods in a wide range of disciplines. Registries and repositories play a critical role by supporting research reproducibility and replicability, but developing them takes effort and few guidelines are available to help prospective creators of these resources. To address this need, the FORCE11 Software Citation Implementation Working Group convened a Task Force to distill the experiences of the managers of existing resources in setting expectations for all stakeholders. In this article, we describe the resultant best practices which include defining the scope, policies, and rules that govern individual registries and repositories, along with the background, examples, and collaborative work that went into their development. We believe that establishing specific policies such as those presented here will help other scientific software registries and repositories better serve their users and their disciplines.
Keywords: Best practices, Research software repository, Research software registry, Software metadata, Repository policies, Research software registry guidelines
Introduction
Research software is an essential constituent in scientific investigations ( Wilson et al., 2014 ; Momcheva & Tollerud, 2015 ; Hettrick, 2018 ; Lamprecht et al., 2020 ), as it is often used to transform and prepare data, perform novel analyses on data, automate manual processes, and visualize results reported in scientific publications ( Howison & Herbsleb, 2011 ). Research software is thus crucial for reproducibility and has been recognized by the scientific community as a research product in its own right—one that should be properly described, accessible, and credited by others ( Smith, Katz & Niemeyer, 2016 ; Chue Hong et al., 2021 ). As a result of the increasing importance of computational methods, communities such as Research Data Alliance (RDA) ( Berman & Crosas, 2020 ) ( https://www.rd-alliance.org/ ) and FORCE11 ( Bourne et al., 2012 ) ( https://www.force11.org/ ) emerged to enable collaboration and establish best practices. Numerous software services that enable open community development of and access to research source code, such as GitHub ( https://github.com/ ) and GitLab ( https://gitlab.com ), appeared and found a role in science. General-purpose repositories, such as Zenodo ( CERN & OpenAIRE, 2013 ) and FigShare ( Thelwall & Kousha, 2016 ), have expanded their scope beyond data to include software, and new repositories, such as Software Heritage ( Di Cosmo & Zacchiroli, 2017 ), have been developed specifically for software. A large number of domain-specific research software registries and repositories have emerged for different scientific disciplines to ensure dissemination and reuse among their communities ( Gentleman et al., 2004 ; Peckham, Hutton & Norris, 2013 ; Greuel & Sperber, 2014 ; Allen & Schmidt, 2015 ; Gil, Ratnakar & Garijo, 2015 ; Gil et al., 2016 ).
Research software registries are typically indexes or catalogs of software metadata, without any code stored in them; while in research software repositories , software is both indexed and stored ( Lamprecht et al., 2020 ). Both types of resource improve software discoverability and research transparency, provide information for software citations, and foster preservation of computational methods that might otherwise be lost over time, thereby supporting research reproducibility and replicability. Many provide or are integrated with other services, including indexing and archival services, that can be leveraged by librarians, digital archivists, journal editors and publishers, and researchers alike.
Transparency about the processes under which registries and repositories operate helps build trust with their user communities ( Yakel et al., 2013 ; Frank et al., 2017 ). However, many domain research software resources have been developed independently, so policies are often heterogeneous across resources, and some policies may be missing altogether. Having specific policies in place ensures that users and administrators have reference documents that define a shared understanding of the scope, practices, and rules that govern these resources.
Though recommendations and best practices have been developed for many aspects of science, none existed that addressed the operations of software registries and repositories. To address this need, a Best Practices for Software Registries Task Force was proposed in June 2018 to the FORCE11 Software Citation Implementation Working Group (SCIWG) ( https://github.com/force11/force11-sciwg ). Seeking to improve the services their resources provide, software registry and repository maintainers came together to learn from each other and promote interoperability. These exchanges surfaced both widely shared practices and notable gaps, which led to the development of nine best practices that set expectations for both users and maintainers of a resource by defining how its contents are managed and may be used, and by clarifying positions on sensitive issues such as attribution.
In this article, we expand on our preprint “Nine Best Practices for Research Software Registries and Repositories: A Concise Guide” ( Task Force on Best Practices for Software Registries et al., 2020 ) to describe our best practices and their development. Our guidelines are actionable and general-purpose, and they reflect the discussions of a community of more than 30 experts who manage over 14 resources (registries or repositories) across different scientific domains. Each guideline provides a rationale, suggestions, and examples based on existing repositories or registries. To reduce repetition, we refer to registries and repositories collectively as “resources.”
The remainder of the article is structured as follows. We first describe background and related efforts in “Background”, followed by the methodology we used when structuring the discussion for creating the guidelines (Methodology). We then describe the nine best practices in “Best Practices for Repositories and Registries”, followed by a discussion (Discussion). “Conclusions” concludes the article by summarizing current efforts to continue the adoption of the proposed practices. Those who contributed to the development of this article are listed in Appendix A, and links to example policies are given in Appendix B. Appendix C provides updated information about resources that have participated in crafting the best practices and an overview of their main attributes.
Background
In the last decade, much has been written about a reproducibility crisis in science (Baker, 2016), stemming in large part from a lack of training in programming skills and the unavailability of computational resources used in publications (Merali, 2010; Peng, 2011; Morin et al., 2012). On these grounds, national and international governments have increased their interest in releasing the artifacts of publicly funded research to the public (Office of Science & Technology Policy, 2016; Directorate-General for Research & Innovation (European Commission), 2018; Australian Research Council, 2018; Chen et al., 2019; Ministère de l’Enseignement supérieur, de la Recherche et de l’Innovation, 2021), and scientists have appealed to colleagues in their fields to release software to improve research transparency (Weiner et al., 2009; Barnes, 2010; Ince, Hatton & Graham-Cumming, 2012) and efficiency (Grosbol & Tody, 2010). Open Science initiatives such as RDA and FORCE11 have emerged in response to these calls for greater transparency and reproducibility. Journals have introduced policies encouraging (or even requiring) that data and software be openly available to others (Editorial Staff, 2019; Fox et al., 2021). New tools have been developed to facilitate depositing research data and software in a repository (Baruch, 2007; CERN & OpenAIRE, 2013; Di Cosmo & Zacchiroli, 2017; Clyburne-Sherin, Fei & Green, 2019; Brinckman et al., 2019; Trisovic et al., 2020) and, consequently, to make them citable so that authors and other contributors gain recognition and credit for their work (Soito & Hwang, 2017; Du et al., 2021).
Support for disseminating research outputs has been formalized in the FAIR and FAIR4RS principles, which state that shared digital artifacts, such as data and software, should be Findable, Accessible, Interoperable, and Reusable (Wilkinson et al., 2016; Lamprecht et al., 2020; Katz, Gruenpeter & Honeyman, 2021; Chue Hong et al., 2021). Conforming with the FAIR principles for published software (Lamprecht et al., 2020) requires facilitating its discoverability, preferably in domain-specific resources (Jiménez et al., 2017). These resources should contain machine-readable metadata to improve the discoverability (Findable) and accessibility (Accessible) of research software through search engines or from within the resource itself. Interoperability is furthered through the adoption of community standards, e.g., schema.org (Guha, Brickley & Macbeth, 2016), or through the ability to translate from one resource to another. The CodeMeta initiative (Jones et al., 2017) achieves this translation by creating a “Rosetta Stone” that maps the metadata terms used by each resource to a common schema. The CodeMeta schema (https://codemeta.github.io/) is an extension of schema.org which adds ten new fields to represent software-specific metadata. To date, CodeMeta has been adopted for representing software metadata by many repositories (https://hal.inria.fr/hal-01897934v3/codemeta).
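As a minimal illustration of such machine-readable metadata, the sketch below builds a small CodeMeta record in Python. The property names are genuine CodeMeta/schema.org terms, but the software described and all of its values are hypothetical.

```python
import json

# Illustrative sketch only: the field names below are genuine CodeMeta/
# schema.org terms, but the software described and its values are made up.
codemeta_record = {
    "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
    "@type": "SoftwareSourceCode",
    "name": "ExampleAnalysisTool",
    "description": "Toy package used to illustrate a registry entry.",
    "codeRepository": "https://example.org/example-analysis-tool",
    "programmingLanguage": "Python",
    "license": "https://spdx.org/licenses/MIT",
    "author": [
        {"@type": "Person", "givenName": "Ada", "familyName": "Example"}
    ],
}

# Serializing the record as JSON-LD makes it machine-readable for search
# engines and metadata harvesters.
print(json.dumps(codemeta_record, indent=2))
```

A resource that exposes records in this form can be harvested by search engines and translated to other schemas via the CodeMeta crosswalk.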
As the usage of computational methods continues to grow, recommendations for improving research software have been proposed (Stodden et al., 2016) in many areas of science and software, as can be seen in the series of “Ten Simple Rules” articles offered by PLOS (Dashnow, Lonsdale & Bourne, 2014), sites such as AstroBetter (https://www.astrobetter.com/), courses to improve skills such as those offered by The Carpentries (https://carpentries.org/), and attempts to measure the adoption of recognized best practices (Serban et al., 2020; Trisovic et al., 2022). Our best practices complement these efforts by addressing the specific needs of research software registries and repositories.
Methodology
The best practices presented in this article were developed by an international Task Force of the FORCE11 Software Citation Implementation Working Group (SCIWG). The Task Force was proposed in June 2018 by author Alice Allen, with the goal of developing a list of best practices for software registries and repositories. Working Group members and a broader group of managers of domain-specific software resources formed the inaugural group. The resulting Task Force members were primarily managers and editors of resources from Europe, the United States, and Australia. Due to the range of time zones, the Task Force held two meetings seven hours apart, with the expectation that, except for the meeting chair, participants would attend one of the two. We generally refer to the two meetings held on the same day with the singular “meeting” in the discussion that follows.
The inaugural Task Force meeting (February 2019) was attended by 18 people representing 14 different resources. Participants introduced themselves and provided some basic information about their resources, including repository name, starting year, number of records, and scope (discipline-specific or general purpose), as well as the services provided by each resource (e.g., support of software citation, software deposits, and DOI minting). Table 1 presents an overview of the collected responses, which highlight the efforts of the Task Force chairs to bring together both discipline-specific and general-purpose resources. The “Other” category indicates that the answer needed clarifying text (e.g., for the question “is the repository actively curated?” some repositories are not manually curated, but have validation checks). Appendix C provides additional information on the questions asked to resource managers (Table C.1) and their responses (Tables C.2–C.4).
Table 1. Overview of the information shared by the 14 resources which participated in the first Task Force meeting.
During the inaugural Task Force meeting, the chair laid out the goal of the Task Force, and the group was invited to brainstorm to identify commonalities for building a list of best practices. Participants also shared challenges they had faced in running their resources and policies they had enacted to manage these resources. The result of the brainstorming and discussion was a list of ideas collected in a common document.
Starting in May 2019 and continuing through the rest of the year, the Task Force met on the third Thursday of each month and followed an iterative process to discuss, add to, and group ideas; to refine and clarify the ideas into distinct practices; and to define the practices more precisely. It was clear from the onset that, though our resources have goals in common, they are also very diverse and would be best served by best practices that were descriptive rather than prescriptive. We reached consensus on whether a practice should be a best practice through discussion and informal voting. Each best practice was given a title and a list of questions or needs that it addressed.
Our initial plan was for the two monthly meetings to follow a common agenda, with each discussion building independently on the previous month’s meeting. In practice, the later meeting often benefited from the earlier one: for instance, if the early meeting developed a list of examples for one of the guidelines, the late meeting refined and added to that list. Discussions were duplicated only when needed, e.g., where the early group had not reached consensus, and often proceeded in different directions according to each group’s expertise and interests. Though we had not anticipated it, holding the two meetings on the same day thus accelerated the work, as the second meeting generally continued rather than repeated the work of the first.
The consensus that emerged from these meetings produced a list of the most broadly applicable practices, which became the initial list that participants drew from during a two-day workshop, funded by the Sloan Foundation and held at the University of Maryland, College Park, in November 2019 (Scientific Software Registry Collaboration Workshop). A goal of the workshop was to develop the final recommendations on best practices for repositories and registries for the FORCE11 SCIWG. The workshop included participants from outside the Task Force, resulting in a broader set of contributions to the final list. In 2020, this group further refined the best practices during virtual meetings and through online collaborative writing, producing the guidelines described in the next section. The Task Force then transitioned into the SciCodes consortium (http://scicodes.net), a permanent community for research software registries and repositories with a particular focus on these best practices. SciCodes has continued to collect information about participating registries and repositories, which are listed in Appendix C, along with some analysis of the number of entries and dates of creation of member resources. Appendix A lists the people who participated in these efforts.
Best practices for repositories and registries
Our recommendations are provided as nine separate policies or statements, each presented below with an explanation as to why we recommend the practice, what the practice describes, and specific considerations to take into account. The last paragraph of each best practice includes one or two examples and a link to Appendix B, which contains many examples from different registries and repositories.
These nine best practices, though not an exhaustive list, are applicable to the varied resources represented in the Task Force, so are likely to be broadly applicable to other scientific software repositories and registries. We believe that adopting these practices will help document, guide, and preserve these resources, and put them in a stronger position to serve their disciplines, users, and communities 1 .
Provide a public scope statement
The landscape of research software is diverse and complex, owing to the overlap between scientific domains, the variety of technical properties and environments, and the additional considerations arising from funding, authors’ affiliations, or intellectual property. A scope statement clarifies the type of software contained in the repository or indexed in the registry; precisely defining the scope therefore helps users who are searching for software to interpret the results they obtain.
Moreover, given that many of these resources accept submissions of software packages, providing a precise and accessible definition helps researchers determine whether they should register or deposit their software, and helps curators by making clear what is out of scope for the resource. Overall, a public scope statement manages the expectations of the potential depositor as well as the software seeker: it states what the resource does and does not contain.
The scope statement should describe:
What is accepted, and acceptable, based on criteria covering scientific discipline, technical characteristics, and administrative properties
What is not accepted, i.e., characteristics that preclude a submission’s inclusion in the resource
Notable exceptions to these rules, if any
Particular criteria of relevance include the scientific community being served and the types of software listed in the registry or stored in the repository, such as source code, compiled executables, or software containers. The scope statement may also include criteria that must be satisfied by accepted software, such as whether certain software quality metrics must be fulfilled or whether a software project must be used in published research. Availability criteria can be considered, such as whether the code has to be publicly available, be in the public domain and/or have a license from a predefined set, or whether software registered in another registry or repository will be accepted.
An illustrative example of such a scope statement is the editorial policy (https://ascl.net/wordpress/submissions/editiorial-policy/) published by the Astrophysics Source Code Library (ASCL) (Allen et al., 2013), which states that it includes only software source code used in published astronomy and astrophysics research articles, and specifically excludes software available only as a binary or web service. Though the ASCL’s focus is on research documented in peer-reviewed journals, its policy also explicitly states that it accepts source code used in successful theses. Other examples of scope statements can be found in Appendix B.
Provide guidance for users
Users who access a resource to search for entries, browse, or retrieve the descriptions of one or more software entries need to understand how to perform these actions. Although this guideline applies to many public online resources, especially research databases, the complexity of the stored metadata and of the curation mechanisms can seriously impede the understandability and usability of software registries and repositories.
User guidance material may include:
How to perform common user tasks, such as searching the resource, or accessing the details of an entry
Answers to questions that are often asked or can be anticipated, e.g., with Frequently Asked Questions or tips and tricks pages
Who to contact for questions or help
A separate section in these guidelines, on the conditions of use policy, covers the resource’s terms of use and how best to cite both individual records and the resource itself.
Guidance for users who wish to contribute software is covered in the next section, Provide guidance to software contributors .
When writing guidelines for users, it is advisable to identify the types of users your resource has, or could potentially have, and their corresponding use cases. Guidance should be offered in multiple forms, such as in-field prompts, linked explanations, and completed examples. Any machine-readable access, such as an API, should be fully described, either directly in the interface or through a pointer to existing documentation, and should specify which formats are supported (e.g., JSON-LD, XML) through content negotiation, if it is enabled.
Examples of such guidance include the bio.tools registry (Ison et al., 2019) API user guide (https://biotools.readthedocs.io/en/latest/api_usage_guide.html) and the ORNL DAAC (ORNL, 2013) instructions for data providers (https://daac.ornl.gov/submit/). Additional examples of user guidance can be found in Appendix B.
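To illustrate content negotiation concretely, the following Python sketch requests the same record in two formats by varying the HTTP Accept header. The endpoint URL and supported media types are assumptions made for this example; each resource’s own API documentation should be consulted for the actual values.

```python
import requests

# Hypothetical endpoint: real resources document their own record URLs and
# supported formats (see, e.g., the bio.tools API user guide).
ENTRY_URL = "https://registry.example.org/api/records/example-analysis-tool"

def fetch_entry(media_type: str) -> str:
    """Retrieve one record, using the HTTP Accept header to pick a format."""
    response = requests.get(ENTRY_URL, headers={"Accept": media_type}, timeout=10)
    response.raise_for_status()
    return response.text

# The same entry retrieved in two representations, assuming the resource
# advertises both through content negotiation.
as_jsonld = fetch_entry("application/ld+json")
as_xml = fetch_entry("application/xml")
```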
Provide guidance to software contributors
Most software registries and repositories rely on a community model, whereby external contributors will provide software entries to the resource. The scope statement will already have explained what is accepted and what is not; the contributor policy addresses who can add or change software entries and the processes involved.
The contributor policy should therefore describe:
Who can or cannot submit entries and/or metadata
Required and optional metadata expected for deposited software
Review process, if any
Curation process, if any
Procedures for updates (e.g., who can make them, when, and how)
Topics to consider when writing a contributor policy include whether the author(s) of a software entry will be contacted if the contributor is not also an author and whether contact is a condition or side-effect of the submission. Additionally, a contributor policy should specify how persistent identifiers are assigned (if used) and should state that depositors must comply with all applicable laws and not be intentionally malicious.
Such material is provided in resources such as the Computational Infrastructure for Geodynamics ( Hwang & Kellogg, 2017 ) software contribution checklist ( https://github.com/geodynamics/best_practices/blob/master/ContributingChecklist.md#contributing-software ) and the CoMSES Net Computational Model Library ( Janssen et al., 2008 ) model archival tutorial ( https://forum.comses.net/t/archiving-your-model-1-gettingstarted/7377 ). Additional examples of guidance for software contributors can be found in Appendix B.
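As a rough illustration of how a resource might enforce such metadata expectations at submission time, the following Python sketch validates a proposed entry against required and optional fields. The field names are hypothetical placeholders, not a recommended set; a real resource would substitute the fields listed in its published contributor policy.

```python
# Hypothetical field names; a real resource would take these from its
# published metadata expectations.
REQUIRED_FIELDS = {"name", "description", "license", "codeRepository"}
OPTIONAL_FIELDS = {"keywords", "referencePublication", "version"}

def validate_submission(entry: dict) -> list:
    """Return a list of policy violations for a proposed software entry."""
    problems = [f"missing required field: {field}"
                for field in sorted(REQUIRED_FIELDS - entry.keys())]
    unknown = entry.keys() - REQUIRED_FIELDS - OPTIONAL_FIELDS
    problems += [f"unrecognized field: {field}" for field in sorted(unknown)]
    return problems

submission = {"name": "ExampleAnalysisTool", "license": "MIT", "funder": "..."}
for problem in validate_submission(submission):
    print(problem)  # flags missing fields and the unrecognized "funder"
```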
Establish an authorship policy
Because research software is often a research product, it is important to report authorship accurately, as it allows for proper scholarly credit and other types of attributions ( Smith, Katz & Niemeyer, 2016 ). However, even though authorship should be defined at the level of a given project, it can prove complicated to determine ( Alliez et al., 2019 ). Roles in software development can widely vary as contributors change with time and versions, and contributions are difficult to gauge beyond the “commit,” giving rise to complex situations. In this context, establishing a dedicated policy ensures that people are given due credit for their work. The policy also serves as a document that administrators can turn to in case disputes arise and allows proactive problem mitigation, rather than having to resort to reactive interpretation. Furthermore, having an authorship policy mirrors similar policies by journals and publishers and thus is part of a larger trend. Note that the authorship policy will be communicated at least partially to users through guidance provided to software contributors. Resource maintainers should ensure this policy remains consistent with the citation policies for the registry or repository (usually, the citation requirements for each piece of research software are under the authority of its owners).
The authorship policy should specify:
How authorship is determined, e.g., by criteria stated by the contributors and/or the resource
Policies around making changes to authorship
The conflict resolution processes adopted to handle authorship disputes
When defining an authorship policy, resource maintainers should consider whether those who are not coders, such as software testers or documentation maintainers, will be identified or credited as authors; the criteria for ordering the list of authors when there are several; and how the resource handles large numbers of authors and group or consortium authorship. Resources may also include guidelines on how changes to authorship are handled, so that each author receives proper credit for their contribution, and on determining each contributor’s role. In particular, the use of a credit vocabulary, such as the Contributor Roles Taxonomy (Allen, O’Connell & Kiermer, 2019), to describe authors’ contributions should be considered for this purpose (http://credit.niso.org/).
An example of authorship policy is provided in the Ethics Guidelines ( https://joss.theoj.org/about#ethics ) and the submission guide authorship section ( https://joss.readthedocs.io/en/latest/submitting.html#authorship ) of the Journal of Open Source Software ( Katz, Niemeyer & Smith, 2018 ), which provides rules for inclusion in the authors list. Additional examples of authorship policies can be found in Appendix B.
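To make the use of a credit vocabulary concrete, the sketch below records contributor roles using genuine CRediT terms and applies one possible, purely hypothetical, rule for deriving an ordered author list; each resource’s policy should define its own criteria.

```python
# The role names are genuine CRediT (Contributor Roles Taxonomy) terms; the
# contributors and the ordering rule are hypothetical.
contributors = [
    {"name": "Ada Example",
     "roles": ["Conceptualization", "Software", "Writing - original draft"]},
    {"name": "Grace Sample", "roles": ["Software", "Validation"]},
    {"name": "Alan Placeholder",
     "roles": ["Data curation", "Writing - review & editing"]},
]

# One possible policy: everyone holding at least one role is an author, with
# the number of roles used as a transparent (if simplistic) ordering rule.
authors = sorted(contributors, key=lambda c: len(c["roles"]), reverse=True)
print([a["name"] for a in authors])
```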
Document and share your metadata schema
The structure and semantics of the information stored in registries and repositories are sometimes complex, which can hinder the clarity, discovery, and reuse of the entries included in these resources. Publicly posting the metadata schema used for the entries helps individual and organizational users understand the structure and properties of the deposited information. The metadata structure informs users how to interact with or ingest records in the resource, and a metadata schema mapped to other schemas, together with an API specification, can improve interoperability between registries and repositories.
This practice should specify:
The schema used and its version number. If a standard or community schema, such as CodeMeta ( Jones et al., 2017 ) or schema.org ( Guha, Brickley & Macbeth, 2016 ) is used, the resource should reference its documentation or official website. If a custom schema is used, formal documentation such as a description of the schema and/or a data dictionary should be provided.
Expected metadata when submitting software, including which fields are required and which are optional, and the format of the content in each field.
To improve the readability of the metadata schema and facilitate its translation to other standards, resources may provide a mapping from their own metadata schema to published standard schemas in the form of a “crosswalk” (e.g., the CodeMeta crosswalk (https://codemeta.github.io/crosswalk/)), and include an example entry from the repository that illustrates all the fields of the metadata schema. For instance, extensive documentation (https://biotoolsschema.readthedocs.io/en/latest/) is available for the biotoolsSchema (Ison et al., 2021) format, which is used in the bio.tools registry. Another example is the OntoSoft vocabulary (http://ontosoft.org/software), used by the OntoSoft registry (Gil, Ratnakar & Garijo, 2015; Gil et al., 2016) and available in both machine-readable and human-readable formats. Additional examples of metadata schemas can be found in Appendix B.
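The following sketch illustrates the crosswalk idea in Python: a mapping from one resource’s custom field names (hypothetical here) to CodeMeta properties is applied to translate a record, and fields without a documented mapping are dropped rather than guessed.

```python
# The left-hand field names are hypothetical; the right-hand terms are real
# CodeMeta properties. A published crosswalk makes such mappings explicit.
CUSTOM_TO_CODEMETA = {
    "title": "name",
    "summary": "description",
    "repo_url": "codeRepository",
    "language": "programmingLanguage",
}

def to_codemeta(custom_record: dict) -> dict:
    """Translate a record into CodeMeta terms, dropping unmapped fields."""
    return {CUSTOM_TO_CODEMETA[key]: value
            for key, value in custom_record.items()
            if key in CUSTOM_TO_CODEMETA}

record = {"title": "ExampleAnalysisTool",
          "repo_url": "https://example.org/example-analysis-tool",
          "internal_id": 42}  # internal_id has no mapping and is dropped
print(to_codemeta(record))
```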
Stipulate conditions of use
The conditions of use document the terms under which users may use a resource’s contents. For software registries and repositories, these conditions should state how the metadata describing the entries can and cannot be used, attributed, and/or cited, including whether commercial use or use in aggregate form is permitted, and should provide information about the licenses that apply to the code and binaries. This policy can forestall potential liabilities and difficulties, such as claims of damage arising from misinterpretation or misapplication of metadata.
This document should include:
Legal disclaimers about the responsibility and liability borne by the registry or repository
License and copyright information, both for individual entries and for the registry or repository as a whole
Conditions for the use of the metadata, including prohibitions, if any
Preferred format for citing software entries
Preferred format for attributing or citing the resource itself
When writing conditions of use, resource maintainers might consider what license governs the metadata, if licensing requirements apply for findings and/or derivatives of the resource, and whether there are differences in the terms and license for commercial vs noncommercial use. Restrictions on the use of the metadata may also be included, as well as a statement to the effect that the registry or repository makes no guarantees about completeness and is not liable for any damages that could arise from the use of the information. Technical restrictions, such as conditions of use of the API (if one is available), may also be mentioned.
Examples of conditions of use include those of DOE CODE (Ensor et al., 2017), which, in addition to its general conditions of use (https://www.osti.gov/disclaim), specifies that the rules for using the hosted code (https://www.osti.gov/doecode/faq#are-there-restrictions) are defined by their respective licenses. Additional examples of conditions of use policies can be found in Appendix B.
State a privacy policy
Privacy policies define how personal data about users are stored, processed, exchanged, or removed. Having a privacy policy demonstrates a strong commitment to the privacy of the registry’s or repository’s users and allows the resource to comply with the legal requirements of many countries, in addition to any that a home institution and/or funding agencies may impose.
The privacy policy of a resource should describe:
What information is collected and how long it is retained
How the information, especially any personal data, is used
Whether tracking is done, what is tracked, and how (e.g., Google Analytics)
Whether cookies are used
When writing a privacy policy, the specific personal data which are collected should be detailed, as well as the justification for collecting them and whether these data are sold or shared. Additionally, the third-party tools used to collect analytic information should be listed explicitly, potentially with references to their own privacy policies. If users can receive emails as a result of visiting or downloading content, such potential solicitations or notifications should be announced. Measures taken to protect users’ privacy, and whether the resource complies with the European Union General Data Protection Regulation (https://gdpr-info.eu/) (GDPR) or other local laws, if applicable, should be explained 2 . As a precaution, the statement can reserve the right to make changes to the privacy policy. Finally, a mechanism by which users can request the removal of their information should be described.
For example, SciCrunch’s (Grethe et al., 2014) privacy policy (https://scicrunch.org/page/privacy) details what kinds of personal information are collected, how they are collected, and how they may be reused, including by third-party websites through the use of cookies. Additional examples of privacy policies can be found in Appendix B.
Provide a retention policy
Many software registries and repositories aim to make the objects they describe permanently discoverable and accessible, e.g., to enable search and citation. However, for various reasons, maintainers and curators may nevertheless have to remove records. Common examples include removing entries that are outdated, no longer fall within the scope of the registry, or are found to violate policies. The resource should therefore document its retention goals and procedures so that users and depositors are aware of them.
The retention policy should describe:
The length of time metadata and/or files are expected to be retained;
Under what conditions metadata and/or files are removed;
Who has the responsibility and ability to remove information;
Procedures to request that metadata and/or files be removed.
The policy should take into account whether best practices for persistent identifiers are followed, including resolvability, retention, and non-reuse of those identifiers. The retention time provided by the resource should not be too prescriptive (e.g., “for the next 10 years”), but rather it should fit within the context of the underlying organization(s) and its funding. This policy should also state who is allowed to edit metadata, delete records, or delete files, and how these changes are performed to preserve the broader consistency of the registry. Finally, the process by which data may be taken offline and archived, as well as the process for its possible retrieval, should be thoroughly documented.
As an example, Bioconductor ( Gentleman et al., 2004 ) has a deprecation process through which software packages are removed if they cannot be successfully built or tested, or upon specific request from the package maintainer. Their policy ( https://bioconductor.org/developers/package-end-of-life/ ) specifies who initiates this process and under which circumstances, as well as the successive steps that lead to the removal of the package. Additional examples of retention policies can be found in Appendix B.
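As a conceptual sketch of how a retention policy can honor persistent-identifier best practices, the Python fragment below retires a record by replacing it with a tombstone, so that the identifier keeps resolving and is never reused; the identifiers, reasons, and storage are all hypothetical.

```python
# Identifiers, reasons, and storage are hypothetical; the point is that a
# removed record's identifier keeps resolving to a tombstone, and
# identifiers are never reassigned to new entries.
records = {
    "registry:0001": {"name": "ExampleAnalysisTool", "status": "active"},
}

def retire_record(identifier: str, reason: str) -> None:
    """Replace a record with a tombstone rather than deleting its identifier."""
    records[identifier] = {
        "status": "tombstone",
        "reason": reason,  # e.g., out of scope, policy violation, author request
        "note": "full metadata archived offline under the retention policy",
    }

retire_record("registry:0001", "software no longer meets the resource's scope")
assert records["registry:0001"]["status"] == "tombstone"  # still resolvable
```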
Disclose your end-of-life policy
Despite their usefulness, the long-term maintenance, sustainability, and persistence of online scientific resources remain a challenge, and published web services or databases can disappear after a few years (Veretnik, Fink & Bourne, 2008; Kern, Fehlmann & Keller, 2020). Sharing a clear end-of-life policy increases the trust of the community served by a registry or repository. It demonstrates a thoughtful commitment to users by informing them that provisions have been considered should the resource close or otherwise end its services for its described artifacts. Such a policy sets expectations and provides reassurance as to how long the records within the registry will remain findable and accessible.
This policy should describe:
Under what circumstances the resource might end its services;
What consequences would result from closure;
What will happen to the metadata and/or the software artifacts contained in the resource in the event of closure;
If long-term preservation is expected, where metadata and/or software artifacts will be migrated for preservation;
How a migration will be funded.
Publishing an end-of-life policy is an opportunity to consider whether, in the event the resource closes, its records will remain available, and if so, how, for whom, and under which conditions, such as archived or “read-only” status. Any restrictions applicable to this policy should be considered and detailed. Establishing a formal agreement or memorandum of understanding with another registry, repository, or institution to receive and preserve the data or project, if applicable, can help prepare for such an eventuality.
Examples of such policies include the Zenodo end-of-life policy (https://help.zenodo.org/), which states that if Zenodo ceases its services, the data hosted in the resource would be migrated and the DOIs provided would be updated to resolve to the new location (currently unspecified). Additional examples of end-of-life policies can be found in Appendix B.
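The following fragment sketches, under purely hypothetical names and URLs, the kind of identifier migration such a policy promises: on closure, each identifier is repointed at the record’s new home so that existing links keep resolving.

```python
# Conceptual sketch only: the DOI prefix, registry URLs, and successor
# location are all hypothetical.
resolver = {
    "doi:10.9999/example.0001": "https://registry.example.org/records/0001",
}

def migrate_all(new_base_url: str) -> None:
    """Repoint each identifier at the successor resource after migration."""
    for identifier, old_url in resolver.items():
        record_id = old_url.rsplit("/", 1)[-1]
        resolver[identifier] = f"{new_base_url}/records/{record_id}"

migrate_all("https://archive-successor.example.net")
print(resolver["doi:10.9999/example.0001"])  # resolves to the new location
```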
A summary of the practices presented in this section can be found in Table 2 .
Table 2. Summary of the best practices with recommendations and examples.
Discussion
The best practices described above serve as a guide for repositories and registries to provide better service to their users, ranging from software developers and researchers to publishers and search engines, and to operate with greater transparency. Implementing our practices gives users significant information about how different resources operate, while preserving important institutional knowledge, standardizing expectations, and guiding user interactions.
For instance, a public scope statement and guidance for users may directly impact usability and, thus, the popularity of the repository. Resources including tools with a simple design and unambiguous commands, as well as infographic guides or video tutorials, ease the learning curve for new users. The guidance for software contributions, conditions of use, and sharing the metadata schema used may help eager users contribute new functionality or tools, which may also help in creating a community around a resource. A privacy policy has become a requirement across geographic boundaries and legal jurisdictions. An authorship policy is critical in facilitating collaborative work among researchers and minimizing the chances for disputes. Finally, retention and end-of-life policies increase the trust and integrity of a repository service.
Policies affecting a single community or domain were deliberately omitted when developing the best practices. First, an exhaustive list would have been a barrier to adoption and not applicable to every repository since each has a different perspective, audience, and motivation that drives policy development for their organization. Second, best practices that regulate the content of a resource are typically domain-specific to the artifact and left to resources to stipulate based on their needs. Participants in the 2019 Scientific Software Registry Collaboration Workshop were surprised to find that only four metadata elements were shared by all represented resources 3 . The diversity of our resources precludes prescriptive requirements, such as requiring specific metadata for records, so these were also deliberately omitted in the proposed best practices.
Hence, we focused on broadly applicable practices considered important by various resources. For example, amongst the participating registries and repositories, very few had codes of conduct that govern the behavior of community members. Codes of conduct are warranted if resources are run as part of a community, especially if comments and reviews are solicited for deposits. In contrast, a code of conduct would be less useful for resources whose primary purpose is to make software and software metadata available for reuse. However, this does not negate their importance and their inclusion as best practices in other arenas concerning software.
As noted by the FAIR4RS movement, software is different from data, motivating a separate effort to address software resources (Lamprecht et al., 2020; Katz et al., 2016). Even so, there are some similarities, and our effort complements and aligns well with recent guidelines, developed in parallel, to increase the transparency, responsibility, user focus, sustainability, and technology of data repositories. For example, both the TRUST Principles (Lin et al., 2020) and the CoreTrustSeal Requirements (CoreTrustSeal, 2019) call for a repository to provide information on its scope and to list the terms of use of its metadata in order to be considered compliant, which aligns with our practices “Provide a public scope statement” and “Stipulate conditions of use”. CoreTrustSeal and TRUST also require that a repository consider continuity of access, which we have expressed as the practice “Disclose your end-of-life policy”. Our best practices differ in that they do not address, for example, staffing needs or professional development for staff, as CoreTrustSeal requires, nor do they address protections against cyber or physical security threats, as the TRUST principles suggest. Inward-facing policies, such as documenting internal workflows and practices, are generally good for reducing operational risks, but internal management practices were considered out of scope for our guidelines.
Figure 1 shows the number of resources that support (partially or in their totality) each best practice. Though we see the proposed best practices as critical, many of the repositories that actively participated in the discussions (14 resources in total) have yet to implement every one of them. We have observed that the first three practices (provide a public scope statement, provide guidance for users, and provide guidance to software contributors) have the widest adoption, while the retention, end-of-life, and authorship policies have the least. Understanding the lag in implementation across all of the best practices requires further engagement with the community.
Figure 1. Number of resources supporting each best practice, out of 14 resources.
Improving the adoption of our guidelines is one of the goals of SciCodes ( http://scicodes.net ), a recent consortium of scientific software registries and repositories. SciCodes evolved from the Task Force as a permanent community to continue the dialogue and share information between domains, including sharing of tools and ideas. SciCodes has also prioritized improving software citation (complementary to the efforts of the FORCE11 SCIWG) and tracking the impact of metadata and interoperability. In addition, SciCodes aims to understand barriers to implementing policies, ensure consistency between various best practices, and continue advocacy for software support by continuing dialogue between registries, repositories, researchers, and other stakeholders.
Conclusions
The dissemination and preservation of research material, where repositories and registries play a key role, lies at the heart of scientific advancement. This article introduces nine best practices for research software registries and repositories. The practices are an outcome of a Task Force of the FORCE11 Software Citation Implementation Working Group and reflect the discussion, collaborative experiences, and consensus of over 30 experts and 14 resources.
The best practices are non-prescriptive, broadly applicable, and include examples and guidelines for their adoption by a community. They specify establishing the working domain (scope) and guidance for both users and software contributors, address legal concerns with privacy, use, and authorship policies, enhance usability by encouraging metadata sharing, and set expectations with retention and end-of-life policies. However, we believe additional work is needed to raise awareness and adoption across resources from different scientific disciplines. Through the SciCodes consortium, our goal is to continue implementing these practices more uniformly in our own registries and repositories and reduce the burdens of adoption. In addition to completing the adoption of these best practices, SciCodes will address topics such as tracking the impact of good metadata, improving interoperability between registries, and making our metadata discoverable by search engines and services such as Google Scholar, ORCID, and discipline indexers.
APPENDIX A: CONTRIBUTORS
The following people contributed to the development of this article through participation in the Best Practices Task Force meetings, 2019 Scientific Software Registry Collaboration Workshop, and/or SciCodes Consortium meetings:
Alain Monteil, Inria, HAL; Software Heritage
Alejandra Gonzalez-Beltran, Science and Technology Facilities Council, UK Research and Innovation
Alexandros Ioannidis, CERN, Zenodo
Alice Allen, University of Maryland, College Park, Astrophysics Source Code Library
Allen Lee, Arizona State University, CoMSES Net Computational Model Library
Ana Trisovic, Harvard University, DataVerse
Anita Bandrowski, UCSD, SciCrunch
Bruce E. Wilson, Oak Ridge National Laboratory, ORNL Distributed Active Archive Center for Biogeochemical Dynamics
Bryce Mecum, NCEAS, UC Santa Barbara, CodeMeta
Caifan Du, iSchool, University of Texas at Austin, CiteAs
Carly Robinson, US Department of Energy, Office of Scientific and Technical Information, DOE CODE
Daniel Garijo, Universidad Politécnica de Madrid (formerly at Information Sciences Institute, University of Southern California), OntoSoft
Daniel S. Katz, University of Illinois at Urbana-Champaign, Associate EiC for JOSS, FORCE11 Software Citation Implementation Working Group, co-chair
David Long, Brigham Young University, IEEE GRS Remote Sensing Code Library
Genevieve Milliken, NYU Bobst Library, IASGE
Hervé Ménager, Hub de Bioinformatique et Biostatistique—Département Biologie Computationnelle, Institut Pasteur, ELIXIR bio.tools
Jessica Hausman, Jet Propulsion Laboratory, PO.DAAC
Jurriaan H. Spaaks, Netherlands eScience Center, Research Software Directory
Katrina Fenlon, University of Maryland, iSchool
Kristin Vanderbilt, Environmental Data Initiative, IMCR
Lorraine Hwang, University of California, Davis, Computational Infrastructure for Geodynamics
Lynn Davis, US Department of Energy, Office of Scientific and Technical Information, DOE CODE
Martin Fenner, Front Matter (formerly at DataCite), FORCE11 Software Citation Implementation Working Group, co-chair
Michael R. Crusoe, CWL, Debian-Med
Michael Hucka, California Institute of Technology, SBML; COMBINE
Mingfang Wu, Australian Research Data Commons, Australian Research Data Commons
Morane Gruenpeter, Inria, Software Heritage
Moritz Schubotz, FIZ Karlsruhe - Leibniz-Institute for Information Infrastructure, swMATH
Neil Chue Hong, Software Sustainability Institute/University of Edinburgh, Software Sustainability Institute; FORCE11 Software Citation Implementation Working Group, co-chair
Pete Meyer, Harvard Medical School, SBGrid; BioGrids
Peter Teuben, University of Maryland, College Park, Astrophysics Source Code Library
Piotr Sliz, Harvard Medical School, SBGrid; BioGrids
Sara Studwell, US Department of Energy, Office of Scientific and Technical Information, DOE CODE
Shelley Stall, American Geophysical Union, AGU Data Services
Stephan Druskat, German Aerospace Center (DLR)/University Jena/Humboldt-Universität zu Berlin, Citation File Format
Ted Carnevale, Neuroscience Department, Yale University, ModelDB
Tom Morrell, Caltech Library, CaltechDATA
Tom Pollard, MIT/PhysioNet, PhysioNet
APPENDIX B: POLICY EXAMPLES
Scope statement
• Astrophysics Source Code Library. (n.d.). Editorial policy .
https://ascl.net/wordpress/submissions/editiorial-policy/
• bio.tools. (n.d.). Curators Guide .
https://biotools.readthedocs.io/en/latest/curators_guide.html
• Caltech Library. (2017). Terms of Deposit .
https://data.caltech.edu/terms
• Caltech Library. (2019). CaltechDATA FAQ .
https://www.library.caltech.edu/caltechdata/faq
• Computational Infrastructure for Geodynamics. (n.d.). Code Donation .
https://geodynamics.org/cig/dev/code-donation/
• CoMSES Net Computational Model Library. (n.d.). Frequently Asked Questions .
https://www.comses.net/about/faq/#model-library
• ORNL DAAC for Biogeochemical Dynamics. (n.d.). Data Scope and Acceptance Policy .
https://daac.ornl.gov/submit/
• RDA Registry and Research Data Australia. (2018). Collection . ARDC Intranet.
https://intranet.ands.org.au/display/DOC/Collection
• Remote Sensing Code Library. (n.d.). Submit .
https://rscl-grss.org/submit.php
• SciCrunch. (n.d.). Curation Guide for SciCrunch Registry .
https://scicrunch.org/page/Curation%20Guidelines
• U.S. Department of Energy: Office of Scientific and Technical Information. (n.d.-a). DOE CODE: Software Policy . https://www.osti.gov/doecode/policy
• U.S. Department of Energy: Office of Scientific and Technical Information. (n.d.-b). FAQs . OSTI.GOV.
https://www.osti.gov/faqs
Guidance for users
• Astrophysics Source Code Library. (2021). Q & A
https://ascl.net/home/getwp/898
• bio.tools. (2021). API Reference
https://biotools.readthedocs.io/en/latest/api_reference.html
• Harvard Dataverse. (n.d.). Curation and Data Management Services
https://support.dataverse.harvard.edu/curation-services
• OntoSoft. (n.d.). An Intelligent Assistant for Software Publication
https://ontosoft.org/users.html
• ORNL DAAC for Biogeochemical Dynamics. (n.d.). Learning
https://daac.ornl.gov/resources/learning/
• U.S. Department of Energy: Office of Scientific and Technical Information. (n.d.). FAQs . OSTI.GOV.
https://www.osti.gov/doecode/faq
Guidance for software contributors
• Astrophysics Source Code Library. (n.d.) Submit a code .
https://ascl.net/code/submit
• bio.tools. (n.d.) Quick Start Guide
https://biotools.readthedocs.io/en/latest/quickstart_guide.html
• Computational Infrastructure for Geodynamics. Contributing Software
https://geodynamics.org/cig/dev/code-donation/checklist/
• CoMSES Net Computational Model Library (2019) Archiving your model: 1. Getting Started
https://forum.comses.net/t/archiving-your-model-1-getting-started/7377
• Harvard Dataverse. (n.d.) For Journals .
https://support.dataverse.harvard.edu/journals
Authorship policy
• Committee on Publication Ethics: COPE. (2020a). Authorship and contributorship .
https://publicationethics.org/authorship
• Committee on Publication Ethics: COPE. (2020b). Core practices .
https://publicationethics.org/core-practices
• Dagstuhl EAS Specification Draft. (2016). The Software Credit Ontology .
https://dagstuhleas.github.io/SoftwareCreditRoles/doc/index-en.html#
• Journal of Open Source Software. (n.d.). Ethics Guidelines .
https://joss.theoj.org/about#ethics
• ORNL DAAC (n.d) Authorship Policy .
• PeerJ Journals. (n.d.-a). Author Policies .
https://peerj.com/about/policies-and-procedures/#author-policies
• PeerJ Journals. (n.d.-b). Publication Ethics .
https://peerj.com/about/policies-and-procedures/#publication-ethics
• PLOS ONE. (n.d.). Authorship .
https://journals.plos.org/plosone/s/authorship
• National Center for Data to Health. (2019). The Contributor Role Ontology.
https://github.com/data2health/contributor-role-ontology
Metadata schema
• ANDS: Australian National Data Service. (n.d.). Metadata . ANDS.
https://www.ands.org.au/working-with-data/metadata
• ANDS: Australian National Data Service. (2016). ANDS Guide: Metadata .
https://www.ands.org.au/data/assets/pdf_file/0004/728041/Metadata-Workinglevel.pdf
• Bernal, I. (2019). Metadata for Data Repositories .
https://doi.org/10.5281/zenodo.3233486
• bio.tools. (2020). Bio-tools/biotoolsSchema [HTML].
https://github.com/bio-tools/biotoolsSchema (Original work published 2015)
• bio.tools. (2019). BiotoolsSchema documentation .
https://biotoolsschema.readthedocs.io/en/latest/
• The CodeMeta crosswalks. (n.d.)
https://codemeta.github.io/crosswalk/
• Citation File Format (CFF). (n.d.)
https://doi.org/10.5281/zenodo.1003149
• The DataVerse Project. (2020). DataVerse 4.0+ Metadata Crosswalk.
https://docs.google.com/spreadsheets/d/10Luzti7svVTVKTA-px27oq3RxCUM-QbiTkm8iMd5C54
• OntoSoft. (2015). OntoSoft Ontology .
https://ontosoft.org/ontology/software/
• Zenodo. (n.d.-a). Schema for Depositing .
https://zenodo.org/schemas/records/record-v1.0.0.json
• Zenodo. (n.d.-b). Schema for Published Record .
https://zenodo.org/schemas/deposits/records/legacyrecord.json
Conditions of use policy
• Allen Institute. (n.d.). Terms of Use .
https://alleninstitute.org/legal/terms-use/
• Europeana. (n.d.). Usage Guidelines for Metadata . Europeana Collections.
https://www.europeana.eu/portal/en/rights/metadata.html
• U.S. Department of Energy: Office of Scientific and Technical Information. (n.d.). DOE CODE FAQ: Are there restrictions on the use of the material in DOE CODE?
https://www.osti.gov/doecode/faq#are-there-restrictions
• Zenodo. (n.d.). Terms of Use .
https://about.zenodo.org/terms/
Privacy policy
• Allen Institute. (n.d.). Privacy Policy .
https://alleninstitute.org/legal/privacy-policy/
• CoMSES Net. (n.d.). Data Privacy Policy .
https://www.comses.net/about/data-privacy/
• Nature. (2020). Privacy Policy .
https://www.nature.com/info/privacy
• Research Data Australia. (n.d.). Privacy Policy .
https://researchdata.ands.org.au/page/privacy
• SciCrunch. (2018). Privacy Policy . SciCrunch.
https://scicrunch.org/page/privacy
• Science Repository. (n.d.). Privacy Policies .
https://www.sciencerepository.org/privacy
• Zenodo. (n.d.). Privacy policy .
https://about.zenodo.org/privacy-policy/
Retention policy
• Bioconductor. (2020). Package End of Life Policy .
https://bioconductor.org/developers/package-end-of-life/
• Caltech Library. (n.d.). CaltechDATA FAQ .
• CoMSES Net Computational Model Library. (n.d.). How long will models be stored in the Computational Model Library?
https://www.comses.net/about/faq/
• Dryad. (2020). Dryad FAQ - Publish and Preserve your Data .
https://datadryad.org/stash/faq#preserved
• Software Heritage. (n.d.). Content policy .
https://www.softwareheritage.org/legal/content-policy/
• Zenodo. (n.d.). General Policies v1.0 .
https://about.zenodo.org/policies/
End-of-life policy
• Figshare. (n.d.). Preservation and Continuity of Access Policy .
https://knowledge.figshare.com/articles/item/preservation-and-continuity-of-access-policy
• Open Science Framework. (2019). FAQs . OSF Guides.
http://help.osf.io/hc/en-us/articles/360019737894-FAQs
• NASA Earth Science Data Preservation Content Specification (n.d.)
https://earthdata.nasa.gov/esdis/eso/standards-and-references/preservation-content-spec
• Zenodo. (n.d.). Frequently Asked Questions .
https://help.zenodo.org/
APPENDIX C: RESOURCE INFORMATION
Since the first Task Force meeting was held in 2019, we have asked new resource representatives joining our community to provide the information shown in Table C.1 . Thanks to this effort, the group has been able to learn about each resource, identify similarities and differences, and thus better inform our meeting discussions.
Tables C.2–C.4 provide an updated overview of the main features of all resources currently involved in the discussion and implementation of the best practices (30 resources in total as of December 2021). Participating resources are diverse, spanning a variety of discipline-specific (e.g., neurosciences, biology, geosciences) and general-purpose repositories. Curated resources tend to have fewer software entries. Most resources were created in the last 20 years, with the oldest dating from 1991. Most accept software deposits, support DOIs to identify their entries, are actively curated, and can be used to cite software.
Table C.3. Number of entries described in the resources of the SciCodes consortium, by December 2021.
Acknowledgments
The best practices presented here were proposed and developed by a Task Force of the FORCE11 Software Citation Implementation Working Group. The following authors, randomly ordered, contributed equally to discussion, conceptualization, writing, reviewing, and editing this article: Daniel Garijo, Lorraine Hwang, Hervé Ménager, Alice Allen, Michael Hucka, Thomas Morrell, and Ana Trisovic.
Task Force on Best Practices for Software Registries participants: Alain Monteil, Alejandra Gonzalez-Beltran, Alexandros Ioannidis, Alice Allen, Allen Lee, Andre Jackson, Bryce Mecum, Caifan Du, Carly Robinson, Daniel Garijo, Daniel Katz, Genevieve Milliken, Hervé Ménager, Jurriaan Spaaks, Katrina Fenlon, Kristin Vanderbilt, Lorraine Hwang, Michael Hucka, Neil Chue Hong, P. Wesley Ryan, Peter Teuben, Shelley Stall, Stephan Druskat, Ted Carnevale, Thomas Morrell.
SciCodes Consortium participants: Alain Monteil, Alejandra Gonzalez-Beltran, Alexandros Ioannidis, Alice Allen, Allen Lee, Ana Trisovic, Anita Bandrowski, Bruce Wilson, Bryce Mecum, Carly Robinson, Celine Sarr, Colin Smith, Daniel Garijo, David Long, Harry Bhadeshia, Hervé Ménager, Jeanette M. Sperhac, Joy Ku, Jurriaan Spaaks, Kristin Vanderbilt, Lorraine Hwang, Matt Jones, Mercé Crosas, Michael R. Crusoe, Mike Hucka, Mingfang Wu, Morane Gruenpeter, Moritz Schubotz, Olaf Teschke, Pete Meyer, Peter Teuben, Piotr Sliz, Sara Studwell, Shelley Stall, Ted Carnevale, Tom Morrell, Tom Pollard, Wolfram Sperber.
Funding Statement
This work was supported by the Alfred P. Sloan Foundation (Grant Number G-2019-12446), and the Heidelberg Institute of Theoretical Studies. Ana Trisovic is funded by the Alfred P. Sloan Foundation (Grant Number P-2020-13988). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Footnotes
1. Please note that the information provided in this article does not constitute legal advice.
2. In the case of GDPR, the regulation applies to all European user personal data, even if the resource is not located in Europe.
3. The elements were: software name, description, keywords, and URL.
Additional Information and Declarations
Competing Interests
The authors declare that they have no competing interests.
Author Contributions
Daniel Garijo conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Hervé Ménager conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Lorraine Hwang conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Ana Trisovic conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Michael Hucka conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Thomas Morrell conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Alice Allen conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Data Availability
The following information was supplied regarding data availability:
There is no data or code associated with this publication.