Observing & Measuring Knowledge Maturing

In a nutshell

Knowledge Maturing Indicators make knowledge maturing traceable. They can be used for diagnosing, monitoring, and evaluating knowledge maturing.

Parent section

Concept of Knowledge Maturing

Further information

MATURE Deliverable 1.1

MATURE Deliverable 1.2

MATURE Deliverable 1.3

Knowledge maturing processes, like individual learning processes, can be key to an organization’s success, but they are also hard to observe and measure, e.g., in terms of efficiency. Still, measurability is crucial for several reasons:

  • The appropriate forms of learning and ways of dealing with knowledge differ considerably between the maturing phases, so any tools that support knowledge maturing need to be aware of the maturity. This can only be achieved with indicators that can be calculated automatically and fed back into the tools themselves.
  • Key to the successful introduction of knowledge maturing support are incremental approaches with feedback loops that start small, are evaluated, and then grow. We therefore need closed-loop approaches based on indicators that assess the effects.
  • Measures to improve knowledge maturing need to integrate with other forms of controlling to justify the investments made and to track their contribution towards the overall business goals of the company.

It was found early on that direct, context-free, and universally applicable measures for knowledge maturing or knowledge maturity are impossible to define. Knowledge maturing indicators were therefore conceived as observable events or states that need to be interpreted carefully in order to support the evaluation of the hard-to-measure construct of knowledge maturing, and that, especially in combination, suggest that knowledge maturing has happened.

Indicator areas

  • Artefact. Indicators measuring aspects related to any form of artefact (e.g., documents, process models).
  • Individual capability. Indicators measuring the individual’s experience, competence, or knowledge.
  • Sociofacts. Indicators going beyond the individual knowledge (e.g., measuring quality of social interaction).
  • Impact. Indicators considering knowledge maturing processes as a black box and just measuring business impact.
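To make the four indicator areas concrete as a data model, the following sketch shows one possible way to represent indicators tagged by area so that tools can consume them. The indicator names and values are hypothetical examples, not taken from the MATURE deliverables.

```python
from dataclasses import dataclass
from enum import Enum

class IndicatorArea(Enum):
    """The four indicator areas described above."""
    ARTEFACT = "artefact"
    INDIVIDUAL_CAPABILITY = "individual capability"
    SOCIOFACT = "sociofact"
    IMPACT = "impact"

@dataclass
class Indicator:
    name: str            # indicator name (hypothetical examples below)
    area: IndicatorArea  # which of the four areas it belongs to
    value: float         # latest observed value; interpretation is context-dependent

# Hypothetical example indicators, one per area.
indicators = [
    Indicator("link_density", IndicatorArea.ARTEFACT, 0.12),
    Indicator("experience_diversity", IndicatorArea.INDIVIDUAL_CAPABILITY, 0.80),
    Indicator("tagging_agreement", IndicatorArea.SOCIOFACT, 0.65),
    Indicator("task_completion_rate", IndicatorArea.IMPACT, 0.90),
]
```

A tool supporting knowledge maturing could filter such a list by area and feed the values into its own maturity-aware behaviour.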

Artefact-related Indicators

Artefact-related criteria seem to be the most straightforward criteria to use because artefacts (if they are in digital form) are easy to access and analyse. But what can characteristics of artefacts tell us about the collective knowledge in an organisation that they supposedly help to materialise?

The underlying assumptions are the following:

  • A higher quality (fitness for use or usefulness) of artefacts reflects the maturity of the underlying knowledge. One cannot produce a high-quality artefact without having sufficiently mature knowledge.
  • Because knowledge maturing expands the scope of the “audience” of that knowledge, this usually involves boundary crossing for which appropriate artefacts are produced as boundary objects so that one can also assume that artefacts will be produced. However, this is also a limitation: this criterion can only cover knowledge that can be and is made explicit.
  • A different perspective is a more collective one that does not aim at an individual piece of knowledge, but rather at an organizational capacity: if the organization is able to produce high-quality artefacts, it also has effective knowledge maturing processes. This perspective was mentioned in particular by interviewees in the representative study. It resonates well with quality management initiatives and their strong underlying assumption that high process quality leads to high product or service quality.

For artefact-related criteria, we have identified two sub-criteria:

  • Quality. This refers to characteristics that are inherent to the artefacts, or at least not dependent on a context such as the customer context. This includes indicators for the artefacts as such, e.g., readability, link density, or structuredness.
  • Usefulness. High quality does not imply that an artefact is useful for someone: if quality is defined not as fitness for use (the customer perspective) but, e.g., as conformance to requirements (the producer perspective), high-quality artefacts can be useless, while low-quality artefacts can sometimes help. This sub-criterion therefore includes judgments about appropriateness.

Both sub-criteria can utilize the same kinds of indicators, but with different interpretations (and potentially slightly different settings). Take rating/assessment as an example: you can assess a document with respect to quality from a context-free producer perspective; from an application perspective that takes into account the context of creation, i.e., how useful it was for the problem situation in which you used it; or from the context of potential re-use, i.e., reflecting the customer perspective. Likewise, usage indicators can be interpreted in terms of usefulness or quality: if a document gets updated, this could be traced back to its low quality, or to its usefulness, which makes it worth updating. Further criteria related to quality or usefulness derive their information from the creation context (who created it, how diverse was the group, for which purpose was it created?) and from the context of reuse (who might reuse it, how diverse might that group be, for which purpose might it be reused?).
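The context-free quality indicators mentioned above (readability, link density, structuredness) can be operationalised quite directly for digital artefacts. The sketch below computes crude versions of them for a text document; the concrete metric definitions are illustrative assumptions, not the ones used in MATURE.

```python
import re

def artefact_quality_indicators(text: str) -> dict:
    """Compute simple, context-free quality indicators for a text artefact.

    The metric definitions here are hypothetical illustrations of the
    'Quality' sub-criterion: properties inherent to the artefact itself.
    """
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    links = re.findall(r"https?://\S+", text)
    # Count markdown-style '#' headings as a rough structuredness signal.
    headings = re.findall(r"^#+\s", text, flags=re.MULTILINE)

    return {
        # crude readability proxy: shorter sentences tend to read more easily
        "avg_sentence_length": round(len(words) / max(len(sentences), 1), 1),
        # link density: links per 100 words
        "link_density": round(100 * len(links) / max(len(words), 1), 2),
        # structuredness: number of headings
        "heading_count": len(headings),
    }
```

The same numbers would be interpreted differently under the usefulness sub-criterion, where the reader's context, not the artefact alone, determines the judgment.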

Individual-related Indicators

This criterion covers the contribution of individual learning to knowledge maturing. We have distinguished knowledge maturing from individual learning: the former is an advancement of knowledge on the collective level, while the latter is limited to advancements on the individual level. Individual learning is thus a necessary prerequisite for knowledge maturing, but not a sufficient one.

More precisely:

  • Knowledge maturing requires individual knowledge, experience, or competence. Individuals can only improve existing practice or create new practice if they have the capacity for that.
  • If sharing and passing on of knowledge works well, then learning of the individual leads to collective learning, too.
  • However, in the qualitative data collected during the interviews, interviewees frequently voiced the concern that experience can also have the opposite effect on knowledge maturing, as it can make you professionally blinkered (skilled incompetence). We therefore cannot simply take a cumulative perspective (i.e., the amount or duration of experience); it was suggested that the diversity of experiences needs to be taken into account. In some cases, employees coming from outside were even seen as one of the major triggers for knowledge maturing, sometimes much more so than internal sources.

Even if we argue that individual capability is a good criterion for knowledge maturing, assessing it remains a hard problem, well known in the area of competence management (competence diagnosis) and in emerging domains such as e-portfolios and the certification of informal learning outcomes. In those domains, one speaks of “evidence” for a certain competency or experience. Assessment remains challenging because:

  • competence, knowledge, and experience are frequently as unobservable as knowledge maturing itself,
  • it is of little use to consider an individual’s “experience” or “competence” in general, because they are always related to a certain competency domain or area of experience, which further increases the complexity,
  • evidence is always highly contextualized, and separating the context from a more general competency is not straightforward, which makes assessment methodologically challenging.

Some commonly used types of evidence include individual performance (in a task, project, etc.), reputation (predominant in the scientific field), diversity of experience, and the role in a social network, which is related to the sociofact dimension. Demonstrator 3 on People Tagging has investigated this in more detail for its search heuristics.
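The point above about diversity versus cumulative experience can be operationalised in many ways; the following sketch shows one hypothetical formula (not prescribed by MATURE) that discounts repeated experience in the same domain using normalised entropy.

```python
import math
from collections import Counter

def diversity_weighted_experience(domains: list[str]) -> float:
    """Hypothetical indicator sketch: experience weighted by its diversity.

    `domains` holds one label per recorded experience (e.g., a project domain).
    A purely cumulative view would just use len(domains); here repetition is
    discounted, reflecting the interviewees' concern that narrow, repeated
    experience can lead to 'skilled incompetence'.
    """
    n = len(domains)
    if n == 0:
        return 0.0
    if n == 1:
        return 1.0
    counts = Counter(domains)
    # Normalised Shannon entropy: 1.0 if every experience is in a new domain,
    # 0.0 if all experiences repeat the same domain.
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    diversity = entropy / math.log(n)
    return n * diversity
```

Under this sketch, ten projects in one domain score lower than three projects in three different domains, matching the interviewees' intuition.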

Sociofact-related Indicators

Sociofacts, which comprise rules, collective practices, etc., are much less accessible for assessment than artefact-related criteria. Still, sociofacts represent an important source for learning about knowledge maturing.

  • On a more specific level, it is assumed that the more mature the knowledge about a subject is, the higher the level of agreement in the collective. This is most obvious for ontological knowledge, i.e., knowledge about how to describe things: a shared vocabulary can only be mature if it is really shared and agreed upon by the respective group. But it can also be illustrated for process knowledge: if an expert designs a process, the knowledge is still immature; it only becomes mature once the process becomes part of daily practice, contributing the knowledge of how to operationalise it. This overlaps with artefact-related criteria like scope of use or scope of creators, which indicate a degree of agreement.
  • On a more collective level, it is assumed that organizational competencies to learn are a prerequisite for mastering knowledge maturing processes. A learning organization is more capable of knowledge maturing. Again, this perspective was largely introduced through the interviews, where a lot of indicators around human resources development, quality of collaboration, the presence of reflective processes, or even the fact that the organisation develops further were mentioned. This can be viewed as a collective capability, which aggregates individual capabilities.
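The level-of-agreement idea for a shared vocabulary can be made measurable, for example over tagging data. The sketch below is one hypothetical operationalisation (average pairwise Jaccard overlap of the tags different users apply to the same item), not a measure defined in the MATURE deliverables.

```python
from itertools import combinations

def tagging_agreement(tags_by_user: dict[str, set[str]]) -> float:
    """Hypothetical agreement indicator for a shared vocabulary.

    For each pair of users, compute the Jaccard overlap of the tag sets they
    applied to the same item, then average over all pairs. A value near 1.0
    suggests the vocabulary is genuinely shared (more mature); a value near
    0.0 suggests little agreement (less mature).
    """
    users = list(tags_by_user)
    if len(users) < 2:
        return 1.0  # a single user trivially agrees with itself
    scores = []
    for a, b in combinations(users, 2):
        ta, tb = tags_by_user[a], tags_by_user[b]
        union = ta | tb
        scores.append(len(ta & tb) / len(union) if union else 1.0)
    return sum(scores) / len(scores)
```

Tracking such a score over time for a group would give one observable signal that the underlying knowledge is maturing, in line with the assumption stated above.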

Related publications


Johannes Moskaliuk, Andreas Rath, Didier Devaurs, Nicolas Weber, Stefanie Lindstaedt, Joachim Kimmerle, Ulrike Cress
Automatic detection of accommodation steps as an indicator of knowledge maturing
Interacting with Computers, in press, 2011


Sally-Anne Barnes, Jenny Bimrose, Alan Brown, Daniela Feldkamp, Andreas Kaschig, Christine Kunzmann, Ronald Maier, Tobias Nelkner, Alexander Sandow, Stefan Thalmann
Knowledge Maturing at Workplaces of Knowledge Workers: Results of an Ethnographically Informed Study
In: 9th International Conference on Knowledge Management (I-KNOW '09), Graz, Austria, 2009, pp. 51-61

Abstract: Maturity models are popular instruments used, e.g., to rate capabilities of maturing elements and select appropriate actions to take the elements to a higher level of maturity. Their application areas are wide spread and range from cognitive science to business applications and engineering. Although there are many maturity models reported in scientific and non-scientific literature, the act of how to develop a maturity model is for the most part unexplored. Many maturity models simply – and vaguely – build on their, often well-known, predecessors without critical discourse about how appropriate the assumptions are that form the basis of these models. This research sheds some light on the construction of maturity models by analysing 16 representative maturity models with the help of a structured content analysis. The results are transformed into a set of questions which can be used for the (re-)creation of maturity models and are answered with the help of the case example of a knowledge maturity model. Furthermore, a definition of the term maturity model is developed from the study’s results.