WP14 Common testing environments: amending the DoW

The idea is to use this page to amend the text for the WP description in the DoW. Please edit this wiki page directly, and use the "insert" and "delete" mark-up to show changes, as illustrated:

Here is some unchanged text.
<ins>Here is some text to be inserted.</ins>
<del>Here is some text to be deleted.</del>
Here is some more unchanged text

Start month: 4
End month: 14
WP leader: Tessella

Objectives

Collect together a set of environments to test the efficacy of tools and techniques for digital preservation against changes in hardware, software, environment and knowledgebase of the Designated Communities, and design or identify new environments if necessary.

Description of work and role of partners

It has been said that it is easy to make claims about digital preservation but very hard to provide evidence about any specific tools and techniques. It is probably reasonable to expect that each proposed preservation technique works well against certain types of digital objects or certain challenges; it is unlikely that there is a universal test. Besides preservation efficacy one also needs to test against portability, interoperability, robustness and scalability.

We need testbed environments and techniques which can tell us whether a proposed preservation technique works, and its zone of effectiveness with respect to the types of objects that can be digitally preserved by that technique.

The CASPAR project adopted what it called accelerated lifetime testing, simulating changes in hardware, software, environment and the knowledge base of designated communities. By applying CASPAR techniques in well-defined scenarios, using many types of data from many disciplines, it was claimed that this provided solid evidence for the efficacy of the proposed CASPAR solution. Note that the CASPAR testbed is not a piece of software but rather a general approach within which other specific pieces of software can be tested.

Other projects have proposed different test beds and environments, for example the closely related Vienna and PLANETS test beds; the PLANETS test bed is a piece of software which gives prominence to significant properties. The SHAMAN Integration Subprojects (ISPs) are testbeds set up to embed preservation features into production and reuse environments. In addition, commercial companies like Tessella have systems (in their case, SDB) that can operate at scale (e.g., migrating complex logical objects consisting of hundreds of thousands of files) and into which new tools can be plugged. This allows their customers to test tools and techniques on content held within their repositories that might not be able to be sent to an external testbed (e.g., for size or security reasons). At the level of bit preservation there are numerous digest techniques, and indeed digests of digests such as ACE [11].
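To illustrate the bit-preservation idea mentioned above: a per-file digest detects corruption of an individual file, while a digest of digests summarises the integrity of a whole collection in one value that can itself be audited or witnessed. The sketch below is a minimal illustration of that general technique in Python, not ACE's actual algorithm; the helper names `file_digest` and `manifest_digest` are hypothetical.

```python
import hashlib
import os
import tempfile

def file_digest(path, algorithm="sha256"):
    """Compute a hex digest of a single file's contents, read in chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def manifest_digest(paths, algorithm="sha256"):
    """Digest-of-digests: hash the sorted per-file digests so a single
    value summarises the bit-level integrity of a whole collection."""
    per_file = sorted(file_digest(p, algorithm) for p in paths)
    h = hashlib.new(algorithm)
    for d in per_file:
        h.update(d.encode("ascii"))
    return h.hexdigest()

# Demonstration on two throwaway files in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    a = os.path.join(tmp, "a.dat")
    b = os.path.join(tmp, "b.dat")
    with open(a, "wb") as f:
        f.write(b"payload A")
    with open(b, "wb") as f:
        f.write(b"payload B")

    before = manifest_digest([a, b])
    # Re-computing over unchanged files reproduces the same summary value
    # regardless of listing order...
    assert manifest_digest([b, a]) == before
    # ...while a change to any member file changes the summary digest.
    with open(a, "wb") as f:
        f.write(b"payload X")
    assert manifest_digest([a, b]) != before
```

In a real audit system the collection digest would be recomputed periodically and compared against an independently stored (or third-party witnessed) copy, so that silent corruption in any member file is detectable without re-checking every per-file digest by hand.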

In this work package we will look across partners and beyond to identify candidate testing techniques. These will be classified and themselves tested against various types of data and scenarios; a number of open competitions will be organised to encourage a competitive spirit. Ultimately we aim to produce a collection of testbed environments from those APARSEN partners able to provide such services, which will include testing procedures and test data, together with test software if appropriate, and which can be used to provide a common measure for digital preservation techniques. We recognise of course that this collection of tests will not be perfect, but we believe it will be possible to provide a benchmark regarding test environment capabilities.

At the very least the testing should provide evidence about effectiveness of the tools against changes over time in hardware, software, environment and changes in the knowledgebase of the Designated Communities.

Task 1410: Identification of testbed techniques and tools

This task collects together the various testbed environments which are available.

Task 1420: Testbed Environment suite

This task produces a testbed environment suite with associated testbed procedures. To facilitate this, partners will make their testbed environments, procedures, test data and software available to other partners.

List of deliverables

  • D14.1 Report on testing environments (M14)

Description of deliverables

D14.1) Report on testing environments: This deliverable consists of a report which summarises the test environments which have been examined and the proposed common testing approach, and also provides a framework within which to evaluate the efficacy and applicability of proposed preservation tools and techniques, for example what types of digital objects the tool/technique is or is not useful for, and what types of changes the tool/technique can guard against (changes in hardware, software, environment and knowledgebase of designated communities). [month 14]


Further revisions in response to feedback from Project Officer 12th/14th December

Task 1410: Identification of testbed techniques and tools

This task collects together the various testbed environments which are available within the project consortium, and identifies previous test environments that have influenced the validation of digital preservation techniques and tools.

Task 1420: Testbed Environment suite

This task produces a testbed environment suite with associated testbed procedures. To facilitate this, partners will make their testbed environments, procedures, test data and software available to other partners, within the constraints of the funding available within the work package.

List of deliverables

  • D14.1 Report on testing environments (M26)

Description of deliverables

D14.1) Report on testing environments: This deliverable consists of a report which summarises the test environments which have been examined and the proposed common testing approach, and also provides a framework within which to evaluate the efficacy and applicability of proposed preservation tools and techniques, for example what types of digital objects the tool/technique is or is not useful for, and what types of changes the tool/technique can guard against (changes in hardware, software, environment and knowledgebase of designated communities). [month 26]

-- SimonLambert - 2012-10-15

Topic revision: r5 - 2013-01-10 - AshHunter