Year 1 EC Review Comments

This page details the comments and feedback provided to the consortium following the EC review of the D14.1 deliverable.

Initial Written Feedback in Draft from EC

Follow the link to the wiki page where this is coordinated: WrittenFeedbackPeriod1#2_2_2_WP14_D14_1_Report_on_testi

Scientific Officer's additional remarks following consortium's initial response to recommendations

For the M14 deliverables we accept the comments with the following observations:

D14.1 : the comments on the classification scheme seem to be based on a view that this was proposed as something definitive. In fact its purpose was to force us all to remember to think about the many different types of digital objects that are out there, because there is a tendency for us all to look only at what is familiar.


> This is far, far short of what the DoW promises for D14.1 (DoW PDF page 27).


The EC are looking to see Test Environments made available for use by project partners and VCoE members. During the review they suggested something like a website where users could submit their 'preservation task'; the site would coordinate the submission and testing of the task across a wide number of systems and produce a result indicating which test system demonstrated the best outcome, thereby helping the user judge which system performed best for their task.

This is a lot to ask of a relatively small WP. I do not think it is realistic, and neither did we during the planning phase for this WP; we therefore decided that we could only document what systems were available and try to assess their capabilities, mainly through desk-based research.

-- AshHunter - 2012-03-15
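For concreteness, the coordination mechanism the reviewers suggested could be sketched as follows: a task is submitted, evaluated against several registered systems, and the results ranked so the user can judge which system performed best. This is only an illustrative sketch; the testbed names and the toy scoring functions are invented, not actual APARSEN systems.

```python
# Hypothetical sketch of the reviewers' suggested coordination website.
# Testbed names and scoring functions are invented for illustration only.

def evaluate(task, systems):
    """Score the task against each registered system and rank the results."""
    results = {name: score(task) for name, score in systems.items()}
    # Highest-scoring system first.
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)

# Invented example systems with toy scoring functions.
SYSTEMS = {
    "Testbed A": lambda task: 0.9 if task["type"] == "rendered" else 0.2,
    "Testbed B": lambda task: 0.6,
    "Testbed C": lambda task: 0.8 if task["type"] == "scientific" else 0.3,
}

ranking = evaluate({"type": "scientific"}, SYSTEMS)
best_system = ranking[0][0]  # the system that performed best for this task
```

Even in this toy form, the sketch shows why the WP judged the idea unrealistic: a real service would need comparable scoring across very different systems, which is exactly what is missing today.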

This is one of the components of the glue which binds the consortium together. We all have our favourite preservation techniques, but what we fail to do is to be explicit about the boundaries of their applicability. For example, migration and emulation are good for simple documents and images (rendered, static, simple, passive) but not very useful for scientific data (non-rendered, static, complex, passive). The classification of types of objects forces us to look at the wider variety of objects; this classification is, we agree, only one of many possible classifications, but it serves a very real purpose without needing to be definitive, being based on a very wide combined view of what is out there.


> This additional explanation does not make D14.1 as presented any more acceptable.


The prominence of this statement should be reduced in any future release of the document to the EC, as it clearly adds no value to how the report is received.

-- AshHunter - 2012-03-15

Similarly, the test environments have their limitations. The many testbeds based on significant properties may work well for rendered objects but completely fail for scientific and other data. The VCoE must be able to provide guidance on preservation of all types of digital objects; D14.1 is fundamental to this aim and was the subject of much heated debate within the consortium. Similarly, we need to be able to help with authenticity of all types of digital objects; that is the reason the concept of Transformational Information Properties was introduced in OAIS, going beyond the significant properties concept. The latter is simply not suited to many of the other types of digital objects. For these reasons the approach taken in D14.1, forcing us to look at as wide a variety of digital objects as possible, is absolutely essential to many aspects of APARSEN and the VCoE.


> The problem here appears to relate to the purpose of the classification scheme. If it is merely a mnemonic device to remind consortium members that there is more in the preservation universe than 'simple documents and images' it adds nothing to the discussion of preservation and the outcomes of Aparsen as a project and as a VCoE.


A bit harsh, but this is clearly a viewpoint that one of the reviewers holds and that needs to be addressed in one way or another.

-- AshHunter - 2012-03-15

More importantly, D14.1 is supposed to be a report on testing environments. The key concerns of the panel, as noted in the review report, relate to a deliverable for M14 which has not actually achieved any testing of potential environments, has not brought all available test beds together for comparison, and has not provided any information on user experiences, methodologies, or the nature of the taxonomies to be used, nor even any reference to relevant prior work.

The arbitrary nature of the classification scheme presented exemplifies the lack of rigour in assessing testing environments. Presumably, if the testing had been done, this classification scheme would have been central to it. If not, what is its purpose? While intrinsically flawed, or at best incomplete, it is also shown to be of no real purpose in the light of the primary deliverable of D14.1, a report on testing environments. This should not mean a list of currently available environments. It should be a methodologically sound compare-and-contrast of available environments, with a view to providing a service to the community as part of the VCoE.


I wonder if we could promise to prepare an online "Guide/Wizard" for preservation testing. It could be a web page with questions and answers that forwards the user to the right material (docs, tools, etc.).

-- YannisTzitzikas - 2012-03-23
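Such a Guide/Wizard is essentially a decision tree: each answer narrows the path until the user reaches a pointer to the right material. A minimal sketch follows; all question texts, answer options, and resource names here are hypothetical illustrations, not actual APARSEN materials.

```python
# Minimal sketch of the proposed preservation-testing "Guide/Wizard".
# Every question, answer option, and resource name below is hypothetical.

# Each node is either a question (answers map to the next node)
# or a leaf pointing at guidance material.
WIZARD = {
    "start": {
        "question": "Is the object rendered (documents, images) or non-rendered (scientific data)?",
        "answers": {"rendered": "rendered_q", "non-rendered": "data_leaf"},
    },
    "rendered_q": {
        "question": "Is the object static or dynamic?",
        "answers": {"static": "migration_leaf", "dynamic": "emulation_leaf"},
    },
    "migration_leaf": {"resource": "Guide: migration tools for static rendered objects"},
    "emulation_leaf": {"resource": "Guide: emulation testbeds for dynamic objects"},
    "data_leaf": {"resource": "Guide: Transformational Information Properties for scientific data"},
}

def run_wizard(answers, tree=WIZARD, node="start"):
    """Follow the user's answers through the tree; return the resource reached."""
    current = tree[node]
    while "resource" not in current:
        choice = answers[current["question"]]
        current = tree[current["answers"][choice]]
    return current["resource"]
```

On a real site each question would be a web form step rather than a pre-supplied dictionary, but the underlying structure, a tree of questions terminating in links to docs and tools, would be the same.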

-- AshHunter - 2012-03-14

Topic revision: r4 - 2012-03-23 - YannisTzitzikas