Academic Careers Understood through Measurement and Norms (ACUMEN) addresses the current discrepancy between the broader social and economic functions of scientific and scholarly research in all fields of the sciences, social sciences and humanities, and the dominant criteria for evaluating the performance of researchers. The assessment of individual researchers’ performance is a cornerstone of the scientific and scholarly system. These evaluations occur at different stages of researchers’ careers and take various forms, including job interviews, annual performance assessments, journal peer review of manuscripts, and reviews of grant applications. They exert a tremendous influence on all aspects of knowledge production. Moreover, the very criteria of what counts as excellent and relevant research for the next generation of researchers will be strongly shaped by that generation’s current experiences in the regular evaluation exercises to which it is subjected. It is therefore urgent that the criteria used in evaluations at the individual level bear a clear and well-understood relationship to the requirements that scientists and scholars will need to meet in the near future. Understanding how researchers are evaluated by their peers and institutions is crucial for assessing how the science system can be improved.
Within the framework of the Lisbon Agenda, ACUMEN will develop criteria and guidelines for Good Evaluation Practices (GEP). These will culminate in recommendations for an ACUMEN Portfolio: a body of evidence supporting the individual career achievements of researchers throughout science and engineering, the social sciences and the humanities. Each researcher’s ACUMEN Portfolio will combine multiple qualitative and quantitative sources of evidence about their career and contributions. The structure of ACUMEN Portfolios for individual academics will be based upon comparative research on: 1) the peer review system in Europe; 2) new quantitative performance indicators; and 3) new ways in which researchers use the web. ACUMEN is grounded in a specific conceptualization of the role evaluation performs in scientific and scholarly research, and in a diagnosis of some of the key problems in the current European evaluation system. These are the main ideas driving the ACUMEN research agenda. This section first explains these ideas, then specifies the research goals of ACUMEN, and finally explains how they relate to the goals of the Call.
ACUMEN departs from the dominant conception of evaluation along three dimensions. First, evaluating research performance is usually conceptualized as the more or less straightforward measurement of the output of institutions, groups, and the individual researchers working within them. These researchers are subjected to the evaluation and, although in some countries they are asked to prepare a self-evaluation, they are treated not as the central actor in the evaluation process but as its object. Second, evaluators usually assume that the evaluation process itself is neutral with respect to its outcomes. Their vision is also often limited to the specific evaluation at hand, with no overview of the cumulative effects that evaluations of individuals have on the scientific and scholarly system at higher levels of aggregation. In other words, evaluators have a systemic blind spot with respect to the constructive effects of evaluation criteria and processes. Third, evaluation is usually framed in universal, more or less timeless concepts such as “excellence”, “originality”, and “social relevance”. These function as container concepts that can take on quite different meanings in different contexts, and framing evaluation in them often obscures the social and cultural variation in actual evaluation practices. These practices are still poorly understood, partly because of the confidential nature of job assessments, journal peer review, and grant application evaluations. As Lamont (2009) has remarked: “Peer review is secretive”.