WP6 - Evaluation


Objectives
WP6 is mainly related to the specific objective “usability, acceptability and adaptability of tools” (note that all other objectives will also be affected). It aims at:
  • assessing the methodological approach adopted by the PALETTE project, as well as describing and operationalizing this approach (adaptability, acceptability and accessibility of the open-source services);
  • providing formative evaluation to the project at each phase (covering both process and outcomes);
  • establishing how a community of practice can be supported more effectively using configurations of information, knowledge management and mediation services (evaluation for knowledge) designed with its participation, and depicting examples of practice that can be used as a wider community resource.
The outputs expected from WP6 fulfil the operational objectives “Development of an evaluation framework” and “Evaluation feedback”.

 
Documents & deliverables
Public deliverables:
 
  • D.EVA.01: A framework plan for the evaluation and depiction of PALETTE processes and outcomes (M6)
Summary: This deliverable sets out the approach to be adopted for the evaluation of the PALETTE design methodology. It is not concerned at this stage with the evaluation of the tools and services to be developed during the project, but with the project method itself. It uses the RUFDATA methodology (RUFDATA standing for Reasons and purposes, Uses, Foci, Data and Evidence, Audience, Timing, and Agency of the evaluation) as a means of profiling the approach, describes the evaluation approach, defines its indicators from a usage point of view, and sets out a plan for the evaluation. It also contains as appendices examples of the production of 'provisional stabilities' (depictions of the project to aid its development) and a statement of the PALETTE method developed by WP1 as a reference.
 
  • D.EVA.02: Framework plan for the evaluation of the methodological approach of PALETTE (M12)
Summary: This deliverable elaborates the framework for the validation and evaluation of PALETTE services and scenarios.
 
  • D.EVA.03: Report on the first evaluation of the PALETTE project (M18)
Summary: This deliverable brings together the evaluations of the experience of the PALETTE project as a way of working, using the broad evaluation methodology developed in D.EVA.01. It builds on trials of the instruments and methods developed in Task 1, based on agreements with the project work packages on the administration protocols. It draws on data (online questionnaires, interviews) gathered in three sweeps of data collection since the beginning of the project to construct 'provisionally stable' scenarios and experiences. By providing formative feedback on the project, the deliverable supplies resources for training sessions based on the scenarios and experiences identified.
 
  • D.EVA.04: Report on the responses of the PALETTE project to formative evaluation (M24)
Summary: This deliverable brings together the experience of the PALETTE project with formative evaluation and stands in the tradition of evaluation as knowledge. In other words, it contributes to our understanding of how evaluation is used and understood in a complex project like PALETTE. It details how the evaluation’s provision of 'provisionally stable' scenarios and experiences has been used, and by whom.
  • D.EVA.05: PALETTE evaluation framework and instruments (M30)
Summary: This report sets out to improve the understanding of evaluation taking place in the complex context of a large multicultural, trans-disciplinary project like PALETTE. To embrace the wide variety of types of evaluation taking place, this report goes beyond conventional visions of evaluation as the province of professional evaluators to include moments of evaluation embedded in other activities carried out by what might be called lay evaluators.

 
Members
CSET
EPFL
UNIFR
INRIA
CTI
CRP-HT
UT
ULG
GATE CNRS