Threatened Species Initiative Data Policy (v1.0 June 2020)

1. Introduction

The Bioplatforms Australia (Bioplatforms)-sponsored Threatened Species Initiative is generating a resource consisting of reference datasets of core genome, population genomic and transcript data for key Australian threatened species, to support national research efforts in genomics, evolution and conservation.

The consortium reserves the right to conduct ‘global analyses’ across these reference genomes, transcriptomes and population genomic data and publish the results in the scientific literature. However, in accordance with the Bermuda and Fort Lauderdale agreements and the more recent Toronto Statement, which provide guidelines for scientific data sharing, Bioplatforms is committed to ensuring that data produced in this effort are shared at appropriate times and with as few restrictions as possible. This advances scientific discovery and maximizes the value to the community from this Australian Government National Collaborative Research Infrastructure Strategy (NCRIS)-funded dataset.

This policy describes the data associated with the consortium, the roles and responsibilities of various consortium members and data users, as well as release schedules and communications/publications expectations.

2. Reference Dataset Description and overall data/information flow

The datasets to be produced by the consortium will include, but are not limited to, the following two areas:

  1. Reference genome (including transcriptome data)
  2. Population genomics

Consortium members will determine the experimental design for each of the study areas above. DNA and/or RNA will be extracted by consortium members, and genomic data will be produced by several Bioplatforms network data generation facilities (e.g. Ramaciotti Centre for Genomics, Sydney; Australian Genome Research Facility (AGRF), Brisbane; ACRF Biomolecular Resource Facility (BRF), Canberra).

Following production, raw data will be uploaded to a password-secured central data repository held at Amazon Web Services (AWS), and managed by the Queensland Cyber Infrastructure Foundation (QCIF, University of Queensland, Brisbane) on behalf of Bioplatforms Australia. To enable recovery in case of disaster, all data in the AWS repository will be mirrored at a second site in Brisbane that is managed by QCIF.

Metadata associated with each file, along with the file names, will be made publicly available via a web portal and associated Application Programming Interface (API), which is managed by QCIF for Bioplatforms. These will include metadata relating to the origin of each sample analysed and the methods used for the extraction of DNA/RNA, preparation of sequencing libraries and generation of sequence data. Access to the data files via the web portal and API will be restricted to authorised users (as defined below) and will require authentication through password use.
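
For illustration only, the sketch below shows how an authorised user might query such a metadata API with an access token. The endpoint path, query parameter and authorisation header are assumptions made for this example and are not the portal's documented interface.

```python
# Illustrative sketch only: the endpoint path, query parameter and
# authorisation header below are assumptions for demonstration, not the
# documented Bioplatforms Data Portal interface.
import requests

PORTAL_API = "https://data.bioplatforms.com/api/search"  # hypothetical endpoint
ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"  # issued to authorised users after approval

response = requests.get(
    PORTAL_API,
    params={"q": "threatened-species-initiative"},  # hypothetical query parameter
    headers={"Authorization": ACCESS_TOKEN},
    timeout=30,
)
response.raise_for_status()

# Each metadata record is expected to describe sample origin and the methods used
for record in response.json().get("results", []):
    print(record.get("sample_id"), record.get("library_prep_method"))
```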

The data will be licensed for use under a Creative Commons Attribution License (CC BY 4.0) with the appropriate acknowledgement as defined in our Communications policy.

Sensitive data or metadata (such as GPS coordinates of rare and threatened species) will be handled using the approach applied by the Sensitive Data Service developed by the Atlas of Living Australia.
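
As an illustration of one common approach to protecting sensitive locations, the sketch below generalises GPS coordinates to a coarser precision before metadata are released. The precision shown is illustrative only and does not reproduce the rules applied by the Atlas of Living Australia's Sensitive Data Service.

```python
# A minimal sketch of coordinate generalisation for sensitive species records.
# The precision chosen here is illustrative only; actual handling follows the
# rules applied by the Atlas of Living Australia's Sensitive Data Service.
def generalise_coordinates(lat: float, lon: float, decimals: int = 1) -> tuple[float, float]:
    """Reduce coordinate precision (1 decimal place is roughly a 10 km grid)."""
    return round(lat, decimals), round(lon, decimals)

# A precise collection locality is blurred before the metadata are released
public_lat, public_lon = generalise_coordinates(-33.868820, 151.209296)
print(public_lat, public_lon)  # -33.9 151.2
```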

If determined necessary by Bioplatforms, in consultation with various research champions, copies of the intermediate and analysed data may also be stored elsewhere. As noted above, when this option is exercised, access to any copies of the data and metadata must be controlled under the same conditions as the primary copies.

Raw data will be shared with consortium researchers and bioinformaticians.

Where appropriate, intermediate and/or analysed data will be uploaded by bioinformaticians to the secure central data repository in Brisbane as described above. Data downloads and API access will be provided under the same arrangements as for the raw data.

Ultimately, the data generated in this project will be made available under open-access conditions to the national/international research community, through a variety of relevant established international data repositories such as the European Nucleotide Archive (ENA) (See also Section 4 – Data sharing schedule). Furthermore, data generated in this project will be made available to conservation managers in a format that is fit-for-purpose with appropriate guidance on how to analyse and use data outputs.

3. Roles and Responsibilities

3.1 Data Initiator(s)

Research champions, listed in Section 5, will assess the data request in consultation with Bioplatforms Australia and are responsible for:

  • Outlining the scope of work and agreeing upon the analysis in consultation with Bioplatforms facilities;
  • Providing the metadata information relevant for each piece of work;
  • Consulting with Bioplatforms on the tasks outlined in Section 3.2.

3.2 Data Sponsor

Bioplatforms Australia, as the Data Sponsor, undertakes the overall duties of ownership, and is responsible for the following tasks (in consultation with various research champions):

  • Defining the purpose of the data items;
  • Defining access arrangements;
  • Authorising any Data Users;
  • Appointing a Data Custodian for copies of the data stored at various sites/on various systems.

3.3 Data Producer(s)

Two broad types of data will be produced: raw and processed. Raw and processed data will be produced by the facilities listed in Section 2.

Producers of both raw and processed data are responsible for:

  • Assigning a Data Custodian for copies of the data stored locally;
  • Data generation and temporary storage;
  • Ensuring data use is compliant with this policy;
  • Quality assurance.

3.4 Data Infrastructure Provider(s)

Data infrastructure providers provide data storage and/or compute infrastructure for the raw or processed data, and are responsible for:

  • Assigning a Data Custodian for copies of the data stored locally.

3.5 Data Custodian(s)

The Data Custodian undertakes the day-to-day management of each item of data stored at various sites and/or on various systems, and is responsible for:

  • Data storage and disposal on that system;
  • Ensuring data use is compliant with this and other policies/agreements;
  • Providing access to Data Users that have been authorised by the Data Sponsor;
  • Ensuring that any Data User who is given access to the data is aware of any data use policies (including this Policy) and their responsibilities.

3.6 Data Users

Data users include all end-users of the raw or processed data generated by the consortium. These comprise consortium researchers, any collaborators, training dataset users and any other approved members of the international research community.

The Data User is any party who has been granted access, by a Data Custodian, to any item of data. They are responsible for:

  • Requesting authorisation from the Data Sponsor;
  • Requesting access from the Data Custodian;
  • Using and safeguarding information according to the conditions stipulated by the Data Sponsor and/or Custodian – including observing any relevant ethics approvals, legislation, data use policies (including this Policy and other relevant data use policies imposed by the Data Owner) and their responsibilities.

Table 2: Group membership and details of their roles within the consortium

Consortium member: Someone who has contributed meaningfully to the science and/or management of the initiative, such as through active involvement in project development, working groups and panels, or contribution of samples
Data Sponsor: Bioplatforms Australia
Research Champions: Steering committee members in consultation with working groups, where appropriate
Data Producers (raw): Ramaciotti Centre for Genomics, Sydney; Australian Genome Research Facility (AGRF), Melbourne; ACRF Biomolecular Resource Facility (BRF), ANU, Canberra. Note: may vary over the course of the initiative.
Data Infrastructure Providers: Queensland Cyber Infrastructure Foundation (QCIF), Brisbane; Amazon Web Services (AWS) Open Data Sets Program
Data Custodians: All groups above are required to appoint a designated Data Custodian to ensure data assets generated throughout this project are managed according to the requirements of this policy

4. Data Sharing Schedule

4.1 Data Sharing Schedule

Various data types will be made available, at appropriate times, throughout the multistep process of generating, processing, assembling, annotating and dispersing the reference datasets.

Broadly, this will fall into two phases: a “mediated-access” phase, where access to the data will be limited to members of the consortium and other authorised parties; and an “open-access” phase where the data will be made openly available on the Bioplatforms Data Portal and other resources including International Data Repositories.

During the “mediated-access” phase, the process for gaining authorisation to access the data is to email data.access@bioplatforms.com with name, affiliation, specific data for which access is being requested and a brief outline of the intended data use. This information will be assessed by the Data Sponsor, Bioplatforms Australia, and the appropriate consortium research champion(s). If approved, Bioplatforms as the Data Sponsor will inform an appropriate Data Custodian to provide access. Data sharing and collaborative interactions are encouraged to advance scientific discovery and maximize the value to the community from this Australian Government (NCRIS)-funded dataset.

Table 3: Data Release Timescales:

Data: All datasets
Release to authorised users ("Mediated-access" phase): Immediately following deposition of data into the QCIF data repository
Public release ("Open-access" phase): 12 months from deposition of data into the QCIF data repository

4.2 Data and Metadata Retention/Persistence for Items Held in the Bioplatforms Data Repository

As noted in Section 4.1 (Data Sharing Schedule), the objective is that all high-quality data generated in this initiative will be made publicly available. The preferred method for public release is deposition in an appropriate discipline repository (e.g. an ELIXIR Core Data Resource or ELIXIR Deposition Database, all of which are intended for the long-term preservation of biological data for a global audience). Note that some data (e.g. data from pilot studies or data that fails QC) will not be submitted to such repositories.

4.2.1 Retention: Regardless of whether data were submitted to an appropriate discipline repository, Bioplatforms will ensure that all data and metadata submitted to the Bioplatforms Data Repository as part of this initiative are retained for the lifetime of the repository. This lifetime is defined by the operational horizon of Bioplatforms, currently at least the next five years.

4.2.2 Functional preservation: Bioplatforms makes no guarantee of the usability or understandability of deposited objects over time.

4.2.3 Authenticity: All data files are stored along with an MD5 checksum of the file content, which may be used to assess the integrity of stored data items.
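
For example, a Data User may verify a downloaded file by recomputing its MD5 digest and comparing it with the stored checksum; the file names in the sketch below are placeholders.

```python
# A minimal sketch of verifying a downloaded file against its stored MD5
# checksum; the file and checksum names below are placeholders.
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = open("sample_R1.fastq.gz.md5").read().split()[0]  # checksum stored with the file
if md5_of("sample_R1.fastq.gz") == expected:
    print("Checksum matches: file integrity confirmed")
else:
    print("Checksum mismatch: re-download the file")
```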

4.2.4 Succession plans: In case of closure of the Bioplatforms Data Repository, best efforts will be made to migrate all content to suitable alternative repositories.

5. Communications expectations

All communications (scientific or general publications and presentations) that arise from the consortium’s work will appropriately acknowledge the input of all relevant contributions. The expectations are detailed in the Consortium Communications Policy.