Difference: PurchaseStorage (1 vs. 25)

Revision 25 (06 Nov 2018) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 9 to 9
 

Revision 24 (09 May 2018) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 9 to 9
 

Revision 23 (18 Aug 2016) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 8 to 8
 

Revision 22 (15 Aug 2016) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 30 to 30
Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite, read-only archiving. The actual price varies slightly depending on the current cost of disks - it can go up or down). £60 per TB is the price to use on grant applications.
Large-volume capacity on a managed system. (Individual research groups use single TBs to hundreds of TBs. Individual requests up to about 50TB are fine - contact IT support to discuss anything larger.)
Backups - a single mirror with 30 days of changes (see below for details).
Changed:
<
<
Performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers.
>
>
Performance - the storage is on direct-attached SATA disks, with an SSD read cache, served from 10Gbps networked servers.
Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks).
Changed:
<
<
Security - the servers are housed in secure Faculty or University data centres and are rack-based. The system is behind the institutional firewall and has normal password security applied.
>
>
Security - the servers are housed in secure Faculty or University data centers and are rack-based. The system is behind the institutional firewall and has normal password security applied.
Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.

A Managed Storage System

Changed:
<
<
>
>
Calleo Server
The Faculty offers a managed storage system for large volume research data. The system is classed as managed because Faculty IT carries out all the background activities associated with data storage:
  • purchasing the raw hardware
  • setting up filesystems and making them available on the network
Line: 51 to 51
 

Data Security

Changed:
<
<
The University Policy on safeguarding data should be applied when making decisions on data security. The Faculty system is behind the institutional firewall and has normal password protection. Data encryption is not applied.
>
>
The University Policy on safeguarding data should be applied when making decisions on data security. The Faculty servers are housed in secure Faculty or University data centers and are rack-based. The data centers have features such as intruder alarms, fire suppression, water detection, etc. The system is behind the institutional firewall and has normal password security applied. Data encryption is not applied by default.
 

Funding and Data Lifetime

Changed:
<
<
The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers as part of the central Faculty infrastructure, and research groups purchase the disks for these servers (including mirrors, backups, redundancy, etc.) from grants. The cost to research groups changes with disk prices, but is currently around £60 per TB (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has been set on the lifetime of the repositories - that may be influenced by future University systems. Research groups should consider the long-term archive requirements of their data).
>
>
The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers as part of the central Faculty infrastructure, and research groups purchase the disks for these servers (including mirrors, backups, redundancy, etc.) from grants. The cost to research groups changes with disk prices, but is currently around £60 per TB (depending on the backup policy - see below) for 5 years. After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has been set on the lifetime of the repositories - that may be influenced by future University systems. Research groups should consider the long-term archive requirements of their data.
Storage space can be purchased by contacting Faculty IT staff (foe-support@leeds.ac.uk). Note that requests for storage up to about 50TB can follow the funding model above, but anything larger will require individual discussion with the Faculty IT team.
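
To make the funding model concrete, the sketch below turns the quoted rates into a 5-year cost for a grant application. It is an illustration only, not a Faculty tool: the function and the rate table are assumptions built from the prices quoted on this page (£60 per TB mirrored, £46 per TB scratch), and real figures should be confirmed with foe-support@leeds.ac.uk.

# Illustrative 5-year cost estimate for a Faculty storage request.
# The rates are the figures quoted on this page (GBP per TB for 5 years);
# confirm current prices with foe-support@leeds.ac.uk before budgeting.
RATES_PER_TB = {
    "mirrored": 60,  # live data plus nightly mirror with 30 days of increments
    "scratch": 46,   # live data only, no backups at all
}

def storage_cost(size_tb, level="mirrored"):
    """Return the 5-year cost in pounds for a request of size_tb terabytes."""
    if size_tb > 50:
        raise ValueError("requests above ~50TB need individual discussion with Faculty IT")
    return size_tb * RATES_PER_TB[level]

print(storage_cost(10))             # 600 - 10TB mirrored, for a grant application
print(storage_cost(10, "scratch"))  # 460 - 10TB scratch space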
Line: 97 to 97
 
META FILEATTACHMENT attachment="disk.jpeg" attr="h" comment="" date="1427213258" name="disk.jpeg" path="disk.jpeg" size="1900" user="ear6stb" version="1"
META FILEATTACHMENT attachment="tape.png" attr="h" comment="" date="1427215174" name="tape.png" path="tape.png" size="1707" user="ear6stb" version="1"
META FILEATTACHMENT attachment="performance.png" attr="h" comment="" date="1427215614" name="performance.png" path="performance.png" size="4062" user="ear6stb" version="1"
Added:
>
>
META FILEATTACHMENT attachment="transtec.jpg" attr="h" comment="Calleo server" date="1471267167" name="transtec.jpg" path="transtec.jpg" size="370885" user="ear6stb" version="1"

Revision 21 (15 Aug 2016) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 26 to 26
 
<--===== PAGE TEXT ======================================-->

Summary of main Features

Changed:
<
<
Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite, read-only archiving. The actual price varies slightly depending on the current cost of disks - it can go up or down). £60 per TB is the price to use on grant applications.
Large-volume capacity on a managed system. (Individual research groups use single TBs to hundreds of TBs. Individual requests up to about 50TB are fine - contact IT support to discuss anything larger.)
Backups - a single mirror with 30 days of changes (see below for details).
Performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers.
Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks).
Security - the servers are housed in secure Faculty or University data centres and are rack-based. The system is behind the institutional firewall and has normal password security applied.
Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.
>
>
Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite, read-only archiving. The actual price varies slightly depending on the current cost of disks - it can go up or down). £60 per TB is the price to use on grant applications.
Large-volume capacity on a managed system. (Individual research groups use single TBs to hundreds of TBs. Individual requests up to about 50TB are fine - contact IT support to discuss anything larger.)
Backups - a single mirror with 30 days of changes (see below for details).
Performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers.
Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks).
Security - the servers are housed in secure Faculty or University data centres and are rack-based. The system is behind the institutional firewall and has normal password security applied.
Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.
 

A Managed Storage System

The Faculty offers a managed storage system for large volume research data. The system is classed as managed because Faculty IT carries out all the background activities associated with data storage:

Line: 50 to 51
 

Data Security

Changed:
<
<
The University Policy on safeguarding data should be applied when making decisions on data security. The Faculty system is behind the institutional firewall and has normal password protection. Data encryption is not applied.
>
>
The University Policy on safeguarding data should be applied when making decisions on data security. The Faculty system is behind the institutional firewall and has normal password protection. Data encryption is not applied.
 

Funding and Data Lifetime

Line: 77 to 78
  style="height: 300;" contentstyle="height: 100%; overflow: hidden;" }%
Changed:
<
<
Summary of main Features
A Managed Storage System
Enterprise vs Non-Enterprise
Data Security
Funding and Data Lifetime
Partition size
Backup Policy
>
>
Summary of main Features
A Managed Storage System
Enterprise vs Non-Enterprise
Data Security
Funding and Data Lifetime
Partition size
Backup Policy
 

Revision 20 (26 Apr 2016) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 18 to 18
 
  • Set T1 = Summary of main Features
  • Set T2 = A Managed Storage System
  • Set T3 = Enterprise vs Non-Enterprise
Changed:
<
<
  • Set T4 = Funding and Data Lifetime
  • Set T5 = Partition size
  • Set T6 = Backup Policy
>
>
  • Set T4 = Data Security
  • Set T5 = Funding and Data Lifetime
  • Set T6 = Partition size
  • Set T7 = Backup Policy
 -->

<--===== PAGE TEXT ======================================-->

Summary of main Features

Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite, read-only archiving. The actual price varies slightly depending on the current cost of disks - it can go up or down). £60 per TB is the price to use on grant applications.
Large-volume capacity on a managed system. (Individual research groups use single TBs to hundreds of TBs. Individual requests up to about 50TB are fine - contact IT support to discuss anything larger.)
Changed:
<
<
Backups - a single mirror with 30 days of changes (see below for details).
>
>
Backups - a single mirror with 30 days of changes (see below for details).
Performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers.
Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks).
Changed:
<
<
Security - the servers are housed in secure Faculty or University data centres and are rack-based.
>
>
Security - the servers are housed in secure Faculty or University data centres and are rack-based. The system is behind the institutional firewall and has normal password security applied.
Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.

A Managed Storage System

Line: 45 to 46
 

Enterprise vs Non-Enterprise

Changed:
<
<
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc.. The University Policy on safeguarding data should be applied when making decisions on which type of system to use for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system. Enterprise storage for research data is available on the University N Drive - but this is relatively small volume (gigabytes rather than terabytes).
>
>
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc. The University Policy on safeguarding data should be applied when making decisions on which type of system to use for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system. Enterprise storage for research data is available on the University N Drive - but this is relatively small volume (gigabytes rather than terabytes).
 

Data Security

Added:
>
>
The University Policy on safeguarding data should be applied when making decisions on data security. The Faculty system is behind the institutional firewall and has normal password protection. Data encryption is not applied.

Funding and Data Lifetime

The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers as part of the central Faculty infrastructure, and research groups purchase the disks for these servers (including mirrors, backups, redundancy, etc.) from grants. The cost to research groups changes with disk prices, but is currently around £60 per TB (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has been set on the lifetime of the repositories - that may be influenced by future University systems. Research groups should consider the long-term archive requirements of their data).

Storage space can be purchased by contacting Faculty IT staff (foe-support@leeds.ac.uk). Note that requests for storage up to about 50TB can follow the funding model above, but anything larger will require individual discussion with the Faculty IT team.

Changed:
<
<

Funding and Data Lifetime

>
>

Partition size

  Each server in the system has a RAID data array which is split into group quotas. The minimum preferred size which the Faculty sells to projects is 2TB - although smaller partitions may be possible (contact IT support). The maximum size of a single partition is currently around 290TB - this is set by the maximum size of the data array on a single server (so it could increase in the future).
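
As an aside on quotas: a group can check how much of its purchased partition is in use from any machine that mounts it. The sketch below is a minimal example, assuming the group quota is exported as its own network filesystem (so the size the operating system reports reflects the quota); the mount point is hypothetical.

# Report usage of a group partition against its purchased quota.
# Assumes the quota is exported as its own filesystem, so the total
# reported by the OS reflects the quota. The mount point is hypothetical.
import shutil

MOUNT_POINT = "/nfs/example-group"  # substitute your group's network drive

usage = shutil.disk_usage(MOUNT_POINT)
used_tb = usage.used / 1e12
total_tb = usage.total / 1e12
print(f"{used_tb:.2f} TB used of {total_tb:.2f} TB ({100 * usage.used / usage.total:.0f}% full)")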
Changed:
<
<

Partition size

>
>

Backup Policy

  The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the data (with reference to the Policy on safeguarding data).
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files), there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge £46 per TB for 5 years.
Line: 78 to 83
 Data Security
Funding and Data Lifetime
Partition size
Added:
>
>
Backup Policy
 

Revision 19 (26 Jan 2016) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 25 to 25
 
<--===== PAGE TEXT ======================================-->

Summary of main Features

Changed:
<
<
Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite, read-only archiving). £60 per TB is the price to use on grant applications.
>
>
Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite, read-only archiving. The actual price varies slightly depending on the current cost of disks - it can go up or down). £60 per TB is the price to use on grant applications.
Large-volume capacity on a managed system. (Individual research groups use single TBs to hundreds of TBs. Individual requests up to about 50TB are fine - contact IT support to discuss anything larger.)
Backups - a single mirror with 30 days of changes (see below for details).
Performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers.

Revision 18 (12 Jan 2016) - earrr

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 12 to 12
 
  • Set Fullcost = 100
  • Set Scratch = 57
  • Set Minsize = 2
Changed:
<
<
  • Set Maxsize = 150
>
>
  • Set Maxsize = 50
 
  • Set Maxpartition = 180
  • Set Unit = TB
  • Set T1 = Summary of main Features

Revision 17 (08 Dec 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 9 to 9
 

Revision 16 (23 Apr 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 54 to 54
 

Funding and Data Lifetime

Changed:
<
<
Each server in the system has a RAID data array which is split into group quotas. The minimum preferred size which the Faculty re-sells to projects is 1TB - although smaller partitions may be possible. The maximum size of a single partition is currently around 290TB - this is set by the maximum size of the data array on a single server (so it could increase in the future).
>
>
Each server in the system has a RAID data array which is split into group quotas. The minimum preferred size which the Faculty sells to projects is 1TB - although smaller partitions may be possible. The maximum size of a single partition is currently around 290TB - this is set by the maximum size of the data array on a single server (so it could increase in the future).
 

Partition size

The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the data (with reference to the Policy on safeguarding data).
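
To make the RAID6 statement concrete: RAID6 stores two disks' worth of parity, so an array survives any two simultaneous disk failures and loses the filesystem on a third. A minimal sketch of the arithmetic, with a hypothetical 12-disk shelf of 4TB disks:

# RAID6 capacity and fault-tolerance arithmetic, as described above.
def raid6_usable_tb(n_disks, disk_tb):
    """RAID6 reserves two disks' worth of parity, so usable space is (n - 2) disks."""
    if n_disks < 4:
        raise ValueError("RAID6 needs at least 4 disks")
    return (n_disks - 2) * disk_tb

# A hypothetical 12-bay array of 4TB disks: 40TB usable, and any two disks
# can fail without loss - a third simultaneous failure loses the filesystem.
print(raid6_usable_tb(12, 4.0))  # 40.0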

Revision 15 (25 Mar 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 7 to 7
 
<--===== PAGE TITLE ======================================-->

Revision 14 (25 Mar 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 14 to 14
 
  • Set Maxsize = 150
  • Set Maxpartition = 120
  • Set Unit = TB
Added:
>
>
  • Set T1 = Summary of main Features
  • Set T2 = A Managed Storage System
  • Set T3 = Enterprise vs Non-Enterprise
  • Set T4 = Funding and Data Lifetime
  • Set T5 = Partition size
  • Set T6 = Backup Policy
 -->

<--===== PAGE TEXT ======================================-->
Changed:
<
<

Summary of main Features

>
>

Summary of main Features

Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite, read-only archiving). £60 per TB is the price to use on grant applications.
Large-volume capacity on a managed system. (Individual research groups use single TBs to hundreds of TBs. Individual requests up to about 50TB are fine - contact IT support to discuss anything larger.)
Backups - a single mirror with 30 days of changes (see below for details).
Line: 26 to 31
Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks).
Security - the servers are housed in secure Faculty or University data centres and are rack-based.
Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.
Changed:
<
<

A Managed Storage System

>
>

A Managed Storage System

The Faculty offers a managed storage system for large volume research data. The system is classed as managed because Faculty IT carries out all the background activities associated with data storage:
  • purchasing the raw hardware
Line: 37 to 42
 
  • maintaining the servers
  • etc.
Changed:
<
<

Enterprise vs Non-Enterprise

>
>

Enterprise vs Non-Enterprise

There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc. The University Policy on safeguarding data should be applied when making decisions on which type of system to use for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system. Enterprise storage for research data is available on the University N Drive - but this is relatively small volume (gigabytes rather than terabytes).
Changed:
<
<

Funding and Data Lifetime

>
>

Data Security

The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers as part of the central Faculty infrastructure, and research groups purchase the disks for these servers (including mirrors, backups, redundancy, etc.) from grants. The cost to research groups changes with disk prices, but is currently around £60 per TB (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has been set on the lifetime of the repositories - that may be influenced by future University systems. Research groups should consider the long-term archive requirements of their data).

Storage space can be purchased by contacting Faculty IT staff (foe-support@leeds.ac.uk). Note that requests for storage up to about 50TB can follow the funding model above, but anything larger will require individual discussion with the Faculty IT team.

Changed:
<
<

Partition size

>
>

Funding and Data Lifetime

  Each server in the system has a RAID data array which is split into group quotas. The minimum preferred size which the Faculty re-sells to projects is 1TB - although smaller partitions may be possible. The maximum size of a single partition is currently around 290TB - this is set by the maximum size of the data array on a single server (so it could increase in the future).
Changed:
<
<

Backup Policy

>
>

Partition size

  The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the data (with reference to the Policy on safeguarding data).
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files), there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge £46 per TB for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight and all files which are changed or deleted during the synchronisation are kept. These incremental changes are kept for at least 30 days, but are kept for longer if disk capacity on the backups allows. This protects against disasters such as theft, flooding, etc. in the server room (at the worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array. It also gives protection against user errors - files which were deleted or changed for at least the last 30 days can be restored. We also keep a third copy of data off-line for most filesystems which provides some extra protection - but we don't define how often that copy is taken - it varies from filesystem to filesystem. For this level of data protection, we charge £60 per TB for 5 years.
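
The mirror-with-increments level in item 2 can be pictured as a nightly rsync from the live array to the backup array, with anything changed or deleted preserved in a dated increments directory. The sketch below is an assumption about how such a scheme might look, not the Faculty's actual backup implementation; all paths are hypothetical.

# Sketch of a nightly "mirror with increments" job as described in item 2.
# Not the Faculty's real script - the paths and layout are hypothetical.
import datetime
import subprocess

LIVE = "/data/live/"                # live filesystem on the primary server
MIRROR = "/backup/mirror/"          # mirror on a fileserver in a separate room
INCREMENTS = "/backup/increments/"  # changed/deleted files are preserved here

stamp = datetime.date.today().isoformat()
subprocess.run(
    ["rsync", "-a", "--delete",
     "--backup",                            # keep files the sync would overwrite or delete
     f"--backup-dir={INCREMENTS}{stamp}",   # in a dated, per-night increments directory
     LIVE, MIRROR],
    check=True,
)
# A separate housekeeping job would prune increment directories older than
# about 30 days, keeping them for longer when backup capacity allows.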

<--===== END CONTENT: ===================================-->
Added:
>
>

<--===== WEB LINKS ========================================-->

 

Revision 13 (24 Mar 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 7 to 7
 
<--===== PAGE TITLE ======================================-->

<--===== PAGE TEXT ======================================-->
Changed:
<
<

Faculty Storage

>
>

Summary of main Features

 
Changed:
<
<
The main features of the Faculty storage system are:
  • Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite read-only archiving). £60 per TB is the price to use on grant applications.
  • Large-volume capacity on a managed system. (Individual research groups use single TBs to hundreds of TBs. Individual requests up to about 50 TB can be accommodated on the current system.)
  • Limited backups - a single mirror with 30 days of changes (see below for details)
  • Limited performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers.
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks).
  • Security - the servers are housed in secure Faculty or University data centres and are rack-based.
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.

Managed Storage

>
>
Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite, read-only archiving). £60 per TB is the price to use on grant applications.
Large-volume capacity on a managed system. (Individual research groups use single TBs to hundreds of TBs. Individual requests up to about 50TB are fine - contact IT support to discuss anything larger.)
Backups - a single mirror with 30 days of changes (see below for details).
Performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers.
Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks).
Security - the servers are housed in secure Faculty or University data centres and are rack-based.
Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.

A Managed Storage System

The Faculty offers a managed storage system for large volume research data. The system is classed as managed because Faculty IT carries out all the background activities associated with data storage:
  • purchasing the raw hardware
Line: 62 to 61
 

<--===== END PAGE =======================================-->
Added:
>
>
META FILEATTACHMENT attachment="pound.jpeg" attr="h" comment="" date="1427210828" name="pound.jpeg" path="pound.jpeg" size="2319" user="ear6stb" version="1"
META FILEATTACHMENT attachment="disaster.jpeg" attr="h" comment="" date="1427212141" name="disaster.jpeg" path="disaster.jpeg" size="7865" user="ear6stb" version="1"
META FILEATTACHMENT attachment="security.jpeg" attr="h" comment="" date="1427212383" name="security.jpeg" path="security.jpeg" size="4609" user="ear6stb" version="1"
META FILEATTACHMENT attachment="network.jpeg" attr="h" comment="" date="1427212501" name="network.jpeg" path="network.jpeg" size="5375" user="ear6stb" version="1"
META FILEATTACHMENT attachment="disk.jpeg" attr="h" comment="" date="1427213258" name="disk.jpeg" path="disk.jpeg" size="1900" user="ear6stb" version="1"
META FILEATTACHMENT attachment="tape.png" attr="h" comment="" date="1427215174" name="tape.png" path="tape.png" size="1707" user="ear6stb" version="1"
META FILEATTACHMENT attachment="performance.png" attr="h" comment="" date="1427215614" name="performance.png" path="performance.png" size="4062" user="ear6stb" version="1"

Revision 12 (24 Mar 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 11 to 11
 
  • Set INTERNALLINKDOMAINS = it.leeds.ac.uk
  • Set Fullcost = 95
  • Set Scratch = 57
Changed:
<
<
  • Set Maxsize = 200
>
>
  • Set Maxsize = 150
 
  • Set Maxpartition = 120
  • Set Unit = TB
-->
Line: 20 to 20
 

Faculty Storage

The main features of the Faculty storage system are:

Changed:
<
<
  • Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite read-only archiving). £60 per TB is the price to use on grant applications.
  • Large-volume (TBs to hundreds of TBs) capacity on a managed system
  • Limited backups - a single mirror with 30 days of changes (see below for details)
  • Limited performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks)
  • Security - the servers are housed in secure Faculty or University data centres and are rack-based
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere
>
>
  • Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite read-only archiving). £60 per TB is the price to use on grant applications.
  • Large-volume capacity on a managed system. (Individual research groups use single TBs to hundreds of TBs. Individual requests up to about 50 TB can be accommodated on the current system.)
  • Limited backups - a single mirror with 30 days of changes (see below for details)
  • Limited performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers.
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks).
  • Security - the servers are housed in secure Faculty or University data centres and are rack-based.
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.
 

Managed Storage

The Faculty offers a managed storage system for large volume research data. The system is classed as managed because Faculty IT carries out all the background activities associated with data storage:

Line: 44 to 44
 

Funding and Data Lifetime

Changed:
<
<
The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers as part of the central Faculty infrastructure, and research groups purchase the disks for these servers (including mirrors, backups, redundancy, etc.) from grants. Obviously the cost to research groups changes from time to time, but is currently around £60 per TB (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has been set on the lifetime of the repositories. This could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups should consider long-term archive requirements).
>
>
The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers as part of the central Faculty infrastructure, and research groups purchase the disks for these servers (including mirrors, backups, redundancy, etc.) from grants. The cost to research groups changes with disk prices, but is currently around £60 per TB (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has been set on the lifetime of the repositories - that may be influenced by future University systems. Research groups should consider the long-term archive requirements of their data).
Storage space can be purchased by contacting Faculty IT staff (foe-support@leeds.ac.uk). Note that requests for storage up to about 50TB can follow the funding model above, but anything larger will require individual discussion with the Faculty IT team.

Revision 11 (24 Mar 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 17 to 17
 -->

<--===== PAGE TEXT ======================================-->
Added:
>
>

Faculty Storage

The main features of the Faculty storage system are:

  • Relatively cheap (about £60 per TB for 5 years read/write access followed by indefinite read-only archiving). £60 per TB is the price to use on grant applications.
  • Large-volume (TBs to hundreds of TBs) capacity on a managed system
  • Limited backups - a single mirror with 30 days of changes (see below for details)
  • Limited performance - the storage is on direct-attached, SATA disks, with an SSD read cache, served from 10Gbps networked servers
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks)
  • Security - the servers are housed in secure Faculty or University data centres and are rack-based
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere
 

Managed Storage

The Faculty offers a managed storage system for large volume research data. The system is classed as managed because Faculty IT carries out all the background activities associated with data storage:

Line: 31 to 41
 

Enterprise vs Non-Enterprise

There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc. The University Policy on safeguarding data should be applied when making decisions on which type of system to use for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system. Enterprise storage for research data is available on the University N Drive - but this is relatively small volume (gigabytes rather than terabytes).

Deleted:
<
<

Faculty Storage

 
Deleted:
<
<
The main features of the Faculty storage system are:
  • Relatively cheap (about £60 per TB)
  • Large-volume capacity on a managed system
  • Limited backups - a single mirror with 30 days of changes (see below for details)
  • Limited performance - the storage is on direct-attached, SATA disks served from 10Gbps networked servers
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks)
  • Security - the servers are housed in locked Faculty or University server rooms and are rack-based
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere
 

Funding and Data Lifetime

The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers as part of the central Faculty infrastructure, and research groups purchase the disks for these servers (including mirrors, backups, redundancy, etc.) from grants. Obviously the cost to research groups changes from time to time, but is currently around £60 per TB (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has been set on the lifetime of the repositories. This could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups should consider long-term archive requirements).

Revision 10 (06 Feb 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 30 to 30
 

Enterprise vs Non-Enterprise

Changed:
<
<
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc.. The University Policy on safeguarding data should be applied when making decisions on whether to use Enterprise or non-Enterprise systems for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system. Enterprise storage for research data is available on the University N Drive - but this is relatively small volume (gigabytes rather than terabytes).
>
>
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc.. The University Policy on safeguarding data should be applied when making decisions on which type of system to use for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system. Enterprise storage for research data is available on the University N Drive - but this is relatively small volume (gigabytes rather than terabytes).
 

Faculty Storage

The main features of the Faculty storage system are:

Line: 52 to 52
 Each server in the system has a RAID data array which is split into group quotas. The minimum preferred size which the Faculty re-sells to projects is 1TB - although smaller partitions may be possible. The maximum size of a single partition is currently around 290 TB - this is set by the maximum size of the data array on a single server (so it could increase in the future).

Backup Policy

Changed:
<
<
The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the associated data (with reference to the Policy on safeguarding data).
>
>
The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the data (with reference to the Policy on safeguarding data).
 
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files), there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge £46 per TB for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight and all files which are changed or deleted during the synchronisation are kept. These incremental changes are kept for at least 30 days, but are kept for longer if disk capacity on the backups allows. This protects against disasters such as theft, flooding, etc. in the server room (at the worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array. It also gives protection against user errors - files which were deleted or changed for at least the last 30 days can be restored. We also keep a third copy of data off-line for most filesystems which provides some extra protection - but we don't define how often that copy is taken - it varies from filesystem to filesystem. For this level of data protection, we charge £60 per TB for 5 years.

Revision 9 (06 Feb 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 11 to 11
 
  • Set INTERNALLINKDOMAINS = it.leeds.ac.uk
  • Set Fullcost = 95
  • Set Scratch = 57
Added:
>
>
  • Set Maxsize = 200
  • Set Maxpartition = 120
  • Set Unit = TB
 -->

<--===== PAGE TEXT ======================================-->
Line: 27 to 30
 

Enterprise vs Non-Enterprise

Changed:
<
<
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc.. The University Policy on safeguarding data should be applied when making decisions on whether to use Enterprise or non-Enterprise systems for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system.
>
>
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc.. The University Policy on safeguarding data should be applied when making decisions on whether to use Enterprise or non-Enterprise systems for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system. Enterprise storage for research data is available on the University N Drive - but this is relatively small volume (gigabytes rather than terabytes).
 

Faculty Storage

The main features of the Faculty storage system are:

Changed:
<
<
  • Relatively cheap (about £60 per Tb)
>
>
  • Relatively cheap (about £60 per TB)
 
  • Large-volume capacity on a managed system
  • Limited backups - a single mirror with 30 days of changes (see below for details)
Changed:
<
<
  • Limited performance - the storage is on direct-attached, SATA disks served from 10Gps networked servers
>
>
  • Limited performance - the storage is on direct-attached, SATA disks served from 10Gbps networked servers
 
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks)
  • Security - the servers are housed in locked Faculty or University server rooms and are rack-based
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere

Funding and Data Lifetime

Changed:
<
<
The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers (to achieve economies of scale) and passes on the exact cost per Tb of the whole system (including backups) to research groups. Obviously this changes from time to time, but is currently around £60 per Tb (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has currently been set on the lifetime of the repositories. Note that this could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups may want to request 10 years full data storage at double the price quoted above).
>
>
The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers as part of the central Faculty infrastructure, and research groups purchase the disks for these servers (including mirrors, backups, redundancy, etc.) from grants. Obviously the cost to research groups changes from time to time, but is currently around £60 per TB (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has been set on the lifetime of the repositories. This could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups should consider long-term archive requirements).
 
Changed:
<
<
Storage space can be purchased by contacting Faculty IT staff (foe-support@leeds.ac.uk).
>
>
Storage space can be purchased by contacting Faculty IT staff (foe-support@leeds.ac.uk). Note that requests for storage up to about 50 TB can follow the funding model above, but anything larger will require individual discussion with the Faculty IT team.
 

Partition size

Changed:
<
<
Each server in the system has a RAID data array which is split into partitions. The minimum preferred size which the Faculty re-sells to projects is 1Tb - although smaller partitions may be possible. The maximum size of a single partition is currently around 120Tb - this is set by the maximum size of the data array on a single server (so it could increase in the future).
>
>
Each server in the system has a RAID data array which is split into group quotas. The minimum preferred size which the Faculty re-sells to projects is 1TB - although smaller partitions may be possible. The maximum size of a single partition is currently around 290 TB - this is set by the maximum size of the data array on a single server (so it could increase in the future).
 

Backup Policy

The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the associated data (with reference to the Policy on safeguarding data).

Changed:
<
<
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files), there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge £46 per Tb for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight and all files which are changed or deleted during the synchronisation are kept. These incremental changes can be kept for either 7 days or 30 days (agreed between the PI and IT support, but the default is 30 days). This protects against disasters such as theft, flooding, etc. in the server room (at the worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array. It also gives protection against user errors - files which were deleted or changed up to 30 days ago can be restored. For this level of data protection, we charge £60 per Tb for 5 years.
>
>
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files), there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge £46 per TB for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight and all files which are changed or deleted during the synchronisation are kept. These incremental changes are kept for at least 30 days, but are kept for longer if disk capacity on the backups allows. This protects against disasters such as theft, flooding, etc. in the server room (at the worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array. It also gives protection against user errors - files which were deleted or changed for at least the last 30 days can be restored. We also keep a third copy of data off-line for most filesystems which provides some extra protection - but we don't define how often that copy is taken - it varies from filesystem to filesystem. For this level of data protection, we charge £60 per TB for 5 years.
 
<--===== END CONTENT: ===================================-->

Revision 8 (26 Jan 2015) - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
<--===== START PAGE =====================================-->
Line: 9 to 9
 

<--===== PAGE TEXT ======================================-->
Line: 29 to 31
 

Faculty Storage

The main features of the Faculty storage system are:

Changed:
<
<
  • Relatively cheap (about £120 per Tb)
>
>
  • Relatively cheap (about £60 per Tb)
 
  • Large-volume capacity on a managed system
  • Limited backups - a single mirror with 30 days of changes (see below for details)
  • Limited performance - the storage is on direct-attached, SATA disks served from 10Gps networked servers
Line: 38 to 40
 
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere

Funding and Data Lifetime

Changed:
<
<
The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers (to achieve economies of scale) and passes on the exact cost per Tb of the whole system (including backups) to research groups. Obviously this changes from time to time, but is currently around £120 per Tb (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has currently been set on the lifetime of the repositories. Note that this could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups may want to request 10 years full data storage at double the price quoted above).
>
>
The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers (to achieve economies of scale) and passes on the exact cost per Tb of the whole system (including backups) to research groups. Obviously this changes from time to time, but is currently around £60 per Tb (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has currently been set on the lifetime of the repositories. Note that this could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups may want to request 10 years full data storage at double the price quoted above).
Storage space can be purchased by contacting Faculty IT staff (foe-support@leeds.ac.uk).
Line: 48 to 50
 

Backup Policy

The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the associated data (with reference to the Policy on safeguarding data).

Changed:
<
<
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files), there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge £80 per Tb for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight and all files which are changed or deleted during the synchronisation are kept. These incremental changes can be kept for either 7 days or 30 days (agreed between the PI and IT support, but the default is 30 days). This protects against disasters such as theft, flooding, etc. in the server room (at the worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array. It also gives protection against user errors - files which were deleted or changed up to 30 days ago can be restored. For this level of data protection, we charge £120 per Tb for 5 years.
>
>
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files); there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge 46 per Tb for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight, and all files which are changed or deleted during the synchronisation are kept. These incremental changes can be kept for either 7 days or 30 days (agreed between the PI and IT support, but the default is 30 days). This protects against disasters such as theft, flooding, etc. in the server room (at worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array, and against user errors - files which were deleted or changed up to 30 days ago can be restored. For this level of data protection, we charge 60 per Tb for 5 years (a sketch of this nightly-mirror pattern follows below).
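The mirrored tier behaves like a nightly mirror that banks changed and deleted files as dated increments. A minimal sketch of that pattern using rsync - the paths are placeholders, and this is not the Faculty's actual tooling:

```python
import subprocess
from datetime import date

# Nightly mirror in the style described above: the mirror is brought in
# sync with the live filesystem, and any file that would be changed or
# deleted is first moved into a dated increments directory.
LIVE = "/export/research-data/"    # placeholder live filesystem
MIRROR = "/mirror/research-data/"  # placeholder mirror on the second server
INCREMENTS = f"/mirror/increments/{date.today().isoformat()}"

subprocess.run(
    ["rsync",
     "-a",                          # archive mode: recurse, keep metadata
     "--delete",                    # propagate deletions to the mirror
     "--backup",                    # keep replaced/deleted files ...
     f"--backup-dir={INCREMENTS}",  # ... in today's increments directory
     LIVE, MIRROR],
    check=True,
)
```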
 

Revision 7 - 08 Sep 2014 - RichardBettie1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
Line: 25 to 25
 

Enterprise vs Non-Enterprise

There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc. The University Policy on safeguarding data should be applied when making decisions on whether to use Enterprise or non-Enterprise systems for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system.
 

Faculty Storage

The main features of the Faculty storage system are:

Line: 48 to 48
 

Backup Policy

The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the associated data (with reference to the Policy on safeguarding data).

Changed:
<
<
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files); there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge 80 pounds per Tb for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight, and all files which are changed or deleted during the synchronisation are kept. These incremental changes can be kept for either 7 days or 30 days (agreed between the PI and IT support, but the default is 30 days). This protects against disasters such as theft, flooding, etc. in the server room (at worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array, and against user errors - files which were deleted or changed up to 30 days ago can be restored. For this level of data protection, we charge 120 pounds per Tb for 5 years.
>
>
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files); there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge 80 per Tb for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight, and all files which are changed or deleted during the synchronisation are kept. These incremental changes can be kept for either 7 days or 30 days (agreed between the PI and IT support, but the default is 30 days). This protects against disasters such as theft, flooding, etc. in the server room (at worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array, and against user errors - files which were deleted or changed up to 30 days ago can be restored. For this level of data protection, we charge 120 per Tb for 5 years.
 

Revision 6 - 04 Sep 2014 - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
Line: 15 to 15
 

Managed Storage

The Faculty offers a managed storage system for large volume research data. The system is classed as managed because Faculty IT carries out all the background activities associated with data storage:

  • purchasing the raw hardware
  • setting up filesystems and making them available on the network
  • backing up the filesystems
  • monitoring and replacing faulty hard drives
  • resizing filesystems if more space is required in the future
  • maintaining the servers
 
  • etc.

Enterprise vs Non-Enterprise

Line: 29 to 29
 

Faculty Storage

The main features of the Faculty storage system are:

Changed:
<
<
  • Relatively cheap (about 120 per Tb)
  • Large-volume capacity on a managed system
  • Limited backups - a single mirror with 30 days of changes (see below for details)
  • Limited performance - the storage is on direct-attached SATA disks served from 10Gbps networked servers.
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks)
  • Security - the servers are housed in locked Faculty or University server rooms and are rack-based
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.
>
>
  • Relatively cheap (about 120 per Tb)
  • Large-volume capacity on a managed system
  • Limited backups - a single mirror with 30 days of changes (see below for details)
  • Limited performance - the storage is on direct-attached SATA disks served from 10Gbps networked servers
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks)
  • Security - the servers are housed in locked Faculty or University server rooms and are rack-based
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere (see the mount sketch below)
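On a Linux desktop, a network drive of this kind is normally attached as a CIFS/SMB mount. A minimal sketch of the general pattern - the server, share, domain and mount point below are placeholders, not the Faculty's real paths:

```python
import subprocess

# Hypothetical example of attaching a network share on Linux
# (needs root; mount.cifs prompts for the user's password).
subprocess.run(
    ["mount", "-t", "cifs",
     "//storage.example.ac.uk/research",  # placeholder server and share
     "/mnt/research",                     # placeholder mount point
     "-o", "username=USERNAME,domain=EXAMPLE"],
    check=True,
)
```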
 

Funding and Data Lifetime

The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers (to achieve economies of scale) and passes on the exact cost per Tb of the whole system (including backups) to research groups. This changes from time to time, but is currently around 120 per Tb (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has currently been set on the lifetime of the repositories. Note that this could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups may want to request 10 years' full data storage at double the price quoted above).

Revision 5 - 04 Sep 2014 - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
Line: 8 to 8
 

Line: 24 to 25
 

Enterprise vs Non-Enterprise

Changed:
<
<
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc. The Policy on safeguarding data should be applied when making decisions on whether to use Enterprise or non-Enterprise systems for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system.
>
>
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc. The University Policy on safeguarding data should be applied when making decisions on whether to use Enterprise or non-Enterprise systems for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system.
 

Faculty Storage

The main features of the Faculty storage system are:

Line: 34 to 35
 
  • Limited performance - the storage is on direct-attached SATA disks served from 10Gbps networked servers.
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks)
  • Security - the servers are housed in locked Faculty or University server rooms and are rack-based
Added:
>
>
  • Accessible as a network drive from all Faculty Windows and Linux desktops, and from Desktop Anywhere.
 

Funding and Data Lifetime

The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers (to achieve economies of scale) and passes on the exact cost per Tb of the whole system (including backups) to research groups. This changes from time to time, but is currently around 120 per Tb (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has currently been set on the lifetime of the repositories. Note that this could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups may want to request 10 years' full data storage at double the price quoted above).

Line: 45 to 47
 Each server in the system has a RAID data array which is split into partitions. The minimum preferred size which the Faculty re-sells to projects is 1Tb - although smaller partitions may be possible. The maximum size of a single partition is currently around 120Tb - this is set by the maximum size of the data array on a single server (so it could increase in the future).
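Because each project buys a fixed partition, it is worth keeping an eye on how full it is. A minimal sketch - the mount point and purchased size are illustrative:

```python
import shutil

MOUNT = "/mnt/research"  # placeholder mount point of the purchased partition
PURCHASED_TB = 10        # placeholder purchased size

usage = shutil.disk_usage(MOUNT)  # total/used/free, in bytes
used_tb = usage.used / 1e12       # decimal terabytes, as disks are sold
print(f"{used_tb:.2f} of {PURCHASED_TB} Tb used "
      f"({100 * usage.used / usage.total:.0f}% of the partition)")
```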

Backup Policy

Changed:
<
<
The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the associated data (with reference to the Policy on safeguarding data
>
>
The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the associated data (with reference to the Policy on safeguarding data).
 
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files); there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge 80 pounds per Tb for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight, and all files which are changed or deleted during the synchronisation are kept. These incremental changes can be kept for either 7 days or 30 days (agreed between the PI and IT support, but the default is 30 days). This protects against disasters such as theft, flooding, etc. in the server room (at worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array, and against user errors - files which were deleted or changed up to 30 days ago can be restored. For this level of data protection, we charge 120 pounds per Tb for 5 years (an illustrative sketch of expiring old increments follows below).
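The 7- or 30-day retention implies a clean-up job that expires old increments. A minimal sketch, assuming increments land in dated directories as in the mirror sketch earlier (paths are placeholders):

```python
import shutil
from datetime import date, timedelta
from pathlib import Path

INCREMENTS = Path("/mirror/increments")  # placeholder increments area
RETENTION_DAYS = 30                      # or 7, as agreed with the PI

cutoff = date.today() - timedelta(days=RETENTION_DAYS)
for d in INCREMENTS.iterdir():
    try:
        day = date.fromisoformat(d.name)  # directories named YYYY-MM-DD
    except ValueError:
        continue                          # ignore anything unexpected
    if day < cutoff:
        shutil.rmtree(d)                  # expire increments past retention
```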


Revision 4 - 03 Sep 2014 - StuartBorthwick1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
Line: 31 to 31
 
  • Relatively cheap (about 120 per Tb)
  • Large-volume capacity on a managed system
  • Limited backups - a single mirror with 30 days of changes (see below for details)
Changed:
<
<
  • Limited performance - the storage is on direct-attached SATA disks served from 1Gbps networked servers
>
>
  • Limited performance - the storage is on direct-attached SATA disks served from 10Gbps networked servers.
 
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks)
Changed:
<
<
  • Security - the servers are housed in locked Faculty server rooms and are rack-based
>
>
  • Security - the servers are housed in locked Faculty or University server rooms and are rack-based
 

Funding and Data Lifetime

The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers (to achieve economies of scale) and passes on the exact cost per Tb of the whole system (including backups) to research groups. This changes from time to time, but is currently around 120 per Tb (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has currently been set on the lifetime of the repositories. Note that this could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups may want to request 10 years' full data storage at double the price quoted above).

Revision 3 - 03 Sep 2014 - RichardBettie1

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
Line: 11 to 11

Deleted:
<
<
Details of the Faculty Storage systen
 

Managed Storage

The Faculty offers a managed storage system for large volume research data. The system is classed as managed because Faculty IT carries out all the background activities associated with data storage:

Line: 27 to 24
 

Enterprise vs Non-Enterprise

Changed:
<
<
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc. The Policy on safeguerding data should be applied when making decisions on whether to use Enterprise or non-Enterprise systems for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system.
>
>
There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc. The Policy on safeguarding data should be applied when making decisions on whether to use Enterprise or non-Enterprise systems for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system.
 

Faculty Storage

The main features of the Faculty storage system are:

Line: 40 to 36
 
  • Security - the servers are housed in locked Faculty server rooms and are rack-based

Funding and Data Lifetime

The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers (to achieve economies of scale) and passes on the exact cost per Tb of the whole system (including backups) to research groups. This changes from time to time, but is currently around 120 per Tb (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has currently been set on the lifetime of the repositories. Note that this could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups may want to request 10 years' full data storage at double the price quoted above).
 
Changed:
<
<
Storage space can be purchasing by contacting Faculty IT staff ( foe-support@leeds.ac.uk).
>
>
Storage space can be purchased by contacting Faculty IT staff ( foe-support@leeds.ac.uk).
 

Partition size

Added:
>
>
 Each server in the system has a RAID data array which is split into partitions. The minimum preferred size which the Faculty re-sells to projects is 1Tb - although smaller partitions may be possible. The maximum size of a single partition is currently around 120Tb - this is set by the maximum size of the data array on a single server (so it could increase in the future).

Backup Policy

The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the associated data (with reference to the Policy on safeguarding data).
 
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files); there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge 80 pounds per Tb for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight, and all files which are changed or deleted during the synchronisation are kept. These incremental changes can be kept for either 7 days or 30 days (agreed between the PI and IT support, but the default is 30 days). This protects against disasters such as theft, flooding, etc. in the server room (at worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array, and against user errors - files which were deleted or changed up to 30 days ago can be restored. For this level of data protection, we charge 120 pounds per Tb for 5 years.

Revision 2 - 18 Aug 2014 - issdjan

Line: 1 to 1
 
META TOPICPARENT name="ResearchData"
Faculty Disk-Based Storage System
 
 
Purchasing Faculty Storage
 
 
Details of the Faculty Storage system
 

Managed Storage

Line: 61 to 51
We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the associated data (with reference to the Policy on safeguarding data).
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files); there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge 80 pounds per Tb for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight, and all files which are changed or deleted during the synchronisation are kept. These incremental changes can be kept for either 7 days or 30 days (agreed between the PI and IT support, but the default is 30 days). This protects against disasters such as theft, flooding, etc. in the server room (at worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array, and against user errors - files which were deleted or changed up to 30 days ago can be restored. For this level of data protection, we charge 120 pounds per Tb for 5 years.
 
 
 

Revision 1 - 11 Jul 2014 - StuartBorthwick1

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="ResearchData"

Purchasing Faculty Storage

Details of the Faculty Storage systems


Managed Storage

The Faculty offers a managed storage system for large volume research data. The system is classed as managed because Faculty IT carries out all the background activities associated with data storage:

  • purchasing the raw hardware
  • setting up filesystems and making them available on the network
  • backing up the filesystems
  • monitoring and replacing faulty hard drives
  • resizing filesystems if more space is required in the future
  • maintaining the servers
  • etc.

Enterprise vs Non-Enterprise

There are different classes of managed storage systems - scratch, Enterprise, non-Enterprise, etc. The Policy on safeguarding data should be applied when making decisions on whether to use Enterprise or non-Enterprise systems for storing data. Enterprise systems feature criteria such as performance, resilience, high availability and comprehensive backups. The Faculty system is a managed, non-Enterprise system.

Faculty Storage

The main features of the Faculty storage system are:

  • Relatively cheap (about 120 per Tb)
  • Large-volume capacity on a managed system
  • Limited backups - a single mirror with 30 days of changes (see below for details)
  • Limited performance - the storage is on direct-attached SATA disks served from 1Gbps networked servers
  • Disaster recovery - in the event of a major incident, it could take some time to restore all data from backups (possibly several weeks)
  • Security - the servers are housed in locked Faculty server rooms and are rack-based

Funding and Data Lifetime

The funding for the Faculty storage system comes from individual research grants - the Faculty buys and maintains large servers (to achieve economies of scale) and passes on the exact cost per Tb of the whole system (including backups) to research groups. This changes from time to time, but is currently around 120 per Tb (depending on the backup policy - see below) for 5 years. (After 5 years, data on Faculty storage systems will move to read-only data repositories which are maintained as part of the overall Faculty system. No time limit has currently been set on the lifetime of the repositories. Note that this could be an issue because some funding bodies require data to be stored for at least 10 years - so at the time of purchase, research groups may want to request 10 years' full data storage at double the price quoted above).

Storage space can be purchased by contacting Faculty IT staff (foe-support@leeds.ac.uk).

Partition size

Each server in the system has a RAID data array which is split into partitions. The minimum preferred size which the Faculty re-sells to projects is 1Tb - although smaller partitions may be possible. The maximum size of a single partition is currently around 120Tb - this is set by the maximum size of the data array on a single server (so it could increase in the future).

Backup Policy

The RAID arrays are constantly monitored and can recover from individual disk failures. Current systems run RAID6 - so at least 3 disks have to fail simultaneously before a filesystem is lost. We run 2 levels of backup - the correct level should be decided for each filesystem by the PI responsible for the associated data (with reference to the Policy on safeguarding data).
  1. Scratch space - In this case, the live data is the only copy. There are no backups at all, and there's no possibility of recovering data from failed filesystems. Filesystems in scratch areas will always have the word scratch in their name. There's a possibility of losing data via user errors (if a user accidentally deletes or overwrites files); there's also a possibility of losing entire filesystems if enough disks in the live RAID array fail simultaneously, or if the array is affected by fire, theft, etc. For this level of data protection, we charge 80 pounds per Tb for 5 years.
  2. Mirrored data with increments - In this case, the live data is mirrored to another RAID array in a separate fileserver in a separate server room. The mirror is synchronised overnight, and all files which are changed or deleted during the synchronisation are kept. These incremental changes can be kept for either 7 days or 30 days (agreed between the PI and IT support, but the default is 30 days). This protects against disasters such as theft, flooding, etc. in the server room (at worst, 24 hours of work could be lost). It also protects against a critical number of disks failing in the RAID array, and against user errors - files which were deleted or changed up to 30 days ago can be restored. For this level of data protection, we charge 120 pounds per Tb for 5 years.

 
This site is powered by the TWiki collaboration platform Powered by PerlCopyright 2008-2014 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.