Enabling teachers to develop pedagogically sound and technically executable learning designs

A learning design describes a sequence of learning activities that learners undertake to achieve particular learning objectives, including the resources and services needed to complete the activities. Unfortunately, there is no learning design language that teachers can use to describe pedagogical strategies explicitly and that can then be interpreted by a machine. This article proposes an approach to support teachers' design processes by providing pedagogy-specific modeling languages for learning design. We have developed a conceptual framework, based on activity theory, which can be used to specify the pedagogical semantics and operational semantics of a pedagogy-specific modeling language. We present our approach using peer assessment as an example. Through our analyses we conclude that enriching the pedagogical and operational semantics of the vocabularies of the modeling language may be a promising approach to developing a new generation of learning design languages, ones that enable teachers to develop pedagogically sound and technically executable learning designs.


Introduction
Open and distance learning provides learners with greater accessibility and flexibility. However, distance learners and teachers may have difficulty coordinating their interactions because they lack the rich communication channels available in face-to-face contexts. There is thus a need for computational coordination mechanisms to orchestrate activities. In addition, more and more computer application tools are used in distance education, so it would be efficient if a computer-based mechanism could automatically configure computer-based teaching and learning environments for the right people at the right time.
The emergence of the learning design concept holds promise as a possible solution to meeting the requirements identified above. A learning design is a description of a pedagogical strategy using a learning design language, an educational process modeling language. If a learning design is represented as a formal model, a machine-interpretable form, it can be used to coordinate interactions and to configure workspaces in a language-compatible execution environment (Koper & Olivier, 2004). A technically executable learning design can help teachers and students focus on their substantive teaching and learning activities, minimizing concerns about coordination and logistics (Koper & Bennett, 2008). As a consequence, a formal learning design, if well documented, can enhance the efficiency of the teaching and learning process in distance learning.
*Corresponding author. Email: Yongwu.Miao@ou.nl
In the past decade, many learning design languages have been developed. Falconer and Littlejohn (2008) distinguished two categories of learning designs: those meant for learning (the executable design) and those meant for teaching (the inspirational design). The audience of the former is a machine; the audience of the latter is the teacher. To represent a learning design that can be processed automatically by a machine, the learning design languages in the first category, such as IMS Learning Design (IMS LD) (IMS Global Learning Consortium, 2003), E2ML (Botturi, 2006), and LAMS (Dalziel, 2003), have to be specified with explicit syntax and semantics. These languages enable the detail, logistics, and technical services/tools required to execute the learning design to be described (Agostinho, 2008). However, as noted by Oliver and Littlejohn (2006), they can clearly describe the mechanics of a learning design but cannot clearly illustrate its pedagogical ideas; that is, most pedagogical ideas are embedded in code unfamiliar to teachers, apart from those partially represented in attributes such as title and description.
In contrast, learning design languages for teachers, such as LDVS (Agostinho, Harper, Oliver, Hedberg, & Wills, 2008; Bennett, Agostinho, Lockyer, Harper, & Lukasiak, 2006), LDLite (Oliver & Littlejohn, 2006), and 8LEM (Verpoorten, Poumay, & Leclercq, 2006), usually have no explicitly specified syntax and semantics. The pedagogy and the rationale of the actual design are described informally through textual description and/or visual diagrams. Such learning design languages are easy to use, but the learning designs represented in them may be ambiguous and hence, given current artificial intelligence technologies, difficult or even impossible for a computer to interpret. Waters and Gibbons (2004), Oliver (2006), and Agostinho (2008) have called for a common learning design language that can be applied and consistently understood by practitioners (such as teachers), researchers, and technical support staff. They have suggested that a notation system similar to those found in other disciplines, such as music and dance, is needed for learning design. Unfortunately, no such notation system exists to facilitate communicating and sharing learning designs among practitioners, researchers, developers, and the computer.
The objective of the work described in this article is to support practitioners in developing pedagogically sound and technically executable learning designs. We explore a possible solution: design notations that practitioners can use to represent pedagogical strategies, with syntax and semantics defined explicitly enough to be interpreted by a machine. We argue that a pedagogy-specific modeling language may be a promising way to help teachers develop, communicate, understand, adopt, and adapt learning designs, and to enable computers to interpret, instantiate, and automate learning designs in practice as well.
In the first section of this article, we present a conceptual framework based on activity theory. In the second section, we explain how to develop a pedagogy-specific modeling language, using peer assessment as an example pedagogy. In the third section, we describe how to represent a peer assessment design using the peer assessment modeling language. After discussing several specific and general issues, we present our conclusions in the fourth section.

A conceptual framework based on activity theory
A learning design is a description of a sequence of teaching and learning activities, and activity is therefore the central component of a learning design language. In this section, we briefly introduce activity theory, which provides a powerful sociocultural lens through which most forms of human activity can be analyzed. We then describe a conceptual framework, based on activity theory, that can be used to specify the pedagogical and operational semantics of notations.

A brief introduction to activity theory
Activity theory is 'a philosophical framework for studying different forms of human praxis as developmental processes, both individual and social levels interlinked at the same time' (Kuutti, 1996). The founder of this theory, Vygotsky (1978), developed the idea of mediation and claimed that human activities are mediated by culturally established instruments such as tools and language. Leont'ev (1947/1981) further suggested that activities are also mediated by other human beings and social relations. Activity theorists (e.g., Engeström, 1987; Engeström, Miettinen, & Punamäki, 1998; Kuutti, 1991) consider the activity to be the minimally meaningful unit of study. Engeström (1987) proposed the structure of human activity illustrated in Figure 1. In this model, the subject refers to an individual or a group of individuals who are involved in an activity, for example, evaluation. The object refers to raw material (e.g., an article) or an abstraction (e.g., an idea) to be transformed into an outcome (e.g., comments). The instrument can be any tool (e.g., a pen) or sign (e.g., jargon) that can be used to help the transformation process. The community is a group of people (e.g., the teachers and students in a project) who share the same general object. The rules are the laws, standards, norms, customs, and strategies that govern the behavior of community members within the activity system (e.g., a student has to review at least three and at most five articles of his/her fellow students in one week). The division of labor is the distribution of tasks and powers between the members of the community (e.g., student A is responsible for reviewing the articles on the topic of constructivism).
Figure 1. Activity system (from Engeström, 1987).
Leont'ev (1978) proposed a three-level structure of activity. As depicted in Figure 2, human activity is driven by an object-related motive and carried out by a community.
The activity consists of goal-directed actions, or chains of actions, performed by an individual (or group). Actions are realized through operations, which are driven by the conditions of the concrete situation and correspond to routinized behaviors performed automatically (Kuutti, 1996). Although we do not explore the full richness of the theory in this article, we will demonstrate that it fits our goal, namely, to divide a pedagogical scenario into different components.

A conceptual framework
In this section, we describe the various components of our conceptual framework.
(1) Stage: The stage describes the goal and intention of the activities performed during a period of time. Multiple activities can or must be performed in a stage to achieve its goal. A stage is completed when the goal is achieved or the stage is aborted. A sequence of stages makes up the learning/teaching process described in a learning design.
(2) Role: The role distinguishes different types of people in the community. A role can be broken into sub-roles if necessary; thus, a group or an organization can be modeled as a hierarchical structure. From the perspective of process modeling, a role can have attributes and memberships. It is necessary to distinguish between nominal roles (e.g., teacher and student) and behavioral roles (e.g., tutor and chair).
(3) Activity and Activity-structure: The activity specifies a logical unit of a task performed individually or collaboratively. Identifying an activity is very important but difficult in process modeling. For example, the statement 'each student in a pair writes a report' means that there are two individual activities, whereas the statement 'two students in a pair write a synthesis' means that there is one collaborative activity. Sometimes a complex process consists of a set of activities performed in sequence and/or in parallel; for example, first all students write a draft independently, and then they merge their drafts into a synthesis. To model such a complex process clearly, the concept of the activity-structure is introduced. Four types of activity-structures can be specified: the sequence-structure (all activities are performed in a prescribed sequence), the selection-structure (a given number of activities selected from a set of candidate activities are performed in any order), the concurrent-structure (a set of activities is performed concurrently by the same individual or by different individuals), and the alternative-structure (one or more activities from a set of candidate activities are selected for performance according to a prescribed condition). The activity has generic attributes such as title, description, state, and completion condition.
(4) Object, Outcome, Artifact, and Information Resource: The artifact represents an outcome that is created within an activity and may then be used and transformed in the same and/or other activities as an intermediate and/or final product. Because an artifact produced in one activity may be used as the input of another, an artifact is also a kind of object of an activity. The information resource represents another kind of object, one that is available (e.g., learning material) before and during the learning process and is not modified or transformed during the process. For example, if an assessment form is available at design time and will not change during the learning process, it should be defined as an information resource; if it will be created by a participant at execution time, it should be defined as an artifact. Usually an artifact refers to a tangible or digitized object such as an article, a physical model, a questionnaire, or a comment. An intangible outcome such as a friendship or a consensus is not an artifact. However, if an intangible outcome is explicitly represented as a digital entity that can be accessed later on, it can be regarded and modeled as an artifact as well. For example, a consensus can be represented as an artifact with two attributes: achieved (true/false) and a description of the consensus.
(5) Service: According to activity theory, the instrument refers to all the means that subjects have at their disposal for influencing the object and achieving their goals. In a computer-based distance learning environment, we use the term service for a computer-based application that handles certain types of artifacts/information resources (e.g., a test tool or a simulator) or facilitates communication (e.g., a chat or a forum).
(6) Rule: The rule specifies the dynamic features of a learning process. It consists of conditions, actions, and/or embedded rules. The condition represents a situation that can be interpreted as a mathematical expression; examples are whether an activity is completed, whether an artifact is available, and whether the scheduled time is over. The action specifies an elementary piece of work such as allocating a task, finishing an activity, transferring an artifact, or calculating the mean of a collection of scores. A rule is represented in the form if (condition) then (actions/rules) else (actions/rules); sometimes a rule has no condition.
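Purely as an illustration (the article itself contains no code, and all class and field names below are our own assumptions), the components above could be sketched as simple data structures, with the if/then/else rule form made explicit:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Illustrative sketch only: these names are ours, not part of the modeling
# language described in the article.

@dataclass
class Role:
    name: str                                  # e.g., "reviewer"
    sub_roles: List["Role"] = field(default_factory=list)  # hierarchical roles

@dataclass
class Artifact:
    title: str                                 # e.g., "response"
    available: bool = False                    # True once produced in an activity

@dataclass
class Rule:
    # "if (condition) then (actions/rules) else (actions/rules)";
    # a rule without a condition always fires its 'then' actions.
    condition: Optional[Callable[[], bool]]
    then_actions: List[Callable[[], None]]
    else_actions: List[Callable[[], None]] = field(default_factory=list)

    def fire(self) -> None:
        actions = (self.then_actions
                   if self.condition is None or self.condition()
                   else self.else_actions)
        for action in actions:
            action()
```

A rule such as 'if an artifact is available then transfer it' would then be a `Rule` whose condition inspects an `Artifact.available` flag; this is one possible reading of the framework, not the authors' implementation.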
The components of the conceptual framework have been described above. Their relationships can be summarized thus: following certain rules, people with various roles perform activities in sequence or parallel within various stages to produce outcomes by using objects and services. We recognize that activity theory entails more components than we have described. Our use is rather instrumental but it encouraged our conceptual thinking on how to define activities as a hierarchical structure consisting of several more detailed components. In the next section, we will present how to develop a pedagogy-specific modeling language according to this conceptual framework through the use of peer assessment as an example pedagogy.

A peer assessment modeling language
In this section, we first describe the development of a peer assessment modeling language and then discuss how to represent a peer assessment design with the peer assessment modeling language.

The development of a peer assessment modeling language
Peer assessment can be characterized as the process in which students collaborate to evaluate their own performance as well as that of fellow students (Sluijsmans, 2002). As with any complex skill, becoming competent in peer assessment requires training and support in, for example, defining assessment criteria, judging the performance of peers, and providing feedback for further learning. Research indicates that, especially in the initial stages of developing peer assessment skills, clear guidelines and supporting tools are essential for students and their teachers (Fastré, van der Klink, Sluijsmans, & van Merriënboer, 2008). If peer assessment is not designed well, it can easily result in an ineffective learning experience (Boud, Cohen, & Sampson, 1999). The success of peer assessment depends greatly on how the process is set up and subsequently managed. It would therefore be very useful if a learning design language could be developed that allows teachers to represent and communicate best practice in peer assessment, develop a peer assessment plan, and then scaffold students in conducting an online peer assessment in distance education.
A peer assessment can be represented in IMS LD, an open e-learning technical standard. IMS LD is a pedagogy-neutral learning design language that can be used to describe a wide range of pedagogical strategies as computer-executable models. However, it is impossible for practitioners to handle the technical complexities required to author complicated learning designs like peer assessment using IMS LD (Miao & Koper, 2007). Instead of making adaptations to IMS LD, we adopt a domain-specific modeling (DSM) approach to develop an advanced learning design language. DSM is a model-driven approach to developing software applications, originally applied to enable end users to model business processes. A DSM language raises the level of abstraction beyond programming by specifying the solution in terms of concepts and associated rules selected from the very domain of the problem being solved (DSM Forum, n.d.). In education, the domain can be understood as the pedagogy. Within a particular pedagogy, the vocabularies used to describe it can carry more specific and more elaborate meaning. For example, in peer assessment, the terms responding, questionnaire, and score have more specific and concrete pedagogical semantics than generic terms like learning activity and property used in IMS LD. In addition, the operational semantics of the vocabularies can be explicitly defined as well; for example, the data type of a score is a number, which can be assigned by the reviewer and then calculated and presented. With such vocabularies, on the one hand, practitioners can intuitively describe their pedagogical strategies; they do not need to describe a learning process by mapping pedagogy-specific concepts to coding terms or to very generic notations like activity, role, and property. On the other hand, a learning design represented with these vocabularies can be interpreted by a computer and transformed into executable models represented in a low-level modeling language like IMS LD.
In adopting DSM to develop a peer assessment modeling language, we began by selecting various categories of vocabularies based on the conceptual framework and clarifying their pedagogical and operational semantics. We reviewed the relevant literature (e.g., de Volder et al., 2007; Joosten-ten Brinke et al., 2007; Liu, Lin, & Yuan, 2001; Sitthiworachart & Joy, 2003; Sluijsmans, 2002; Somervell, 1993; Topping, 1998) to choose and compare vocabularies in an activity-centric manner. For each type of activity, we needed to specify in which stage and for which goal it would be performed. This raised a series of questions, such as:
• Which role would carry out this type of activity?
• What kinds of artifacts and possible information resources would be used as input, and what kind of artifact would be the expected outcome of this type of activity?
• What kinds of services would be needed to complete the activity?
• What rules might be applied to specify how to accomplish the activity and how artifacts should be distributed?
For example, we can define an activity called responding by specifying its semantics and articulating its relationship to other vocabularies. The semantics of responding can be clearly defined within the context of peer assessment: responding is an activity performed within the evidence creation stage by a candidate, who uses a test tool to provide a response to a questionnaire. All the vocabularies involved, such as evidence creation, candidate, test tool, questionnaire, and response, are pedagogically meaningful in a peer assessment. For each type of activity, certain rules can be selected, and parameter values are assigned when a rule is applied, if necessary. For example, a rule 'if a time limit ( ) is over then complete the activity' can be defined, and a time period (e.g., one hour) can be assigned to the parameter time limit.
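The idea of a rule template whose parameter is bound at design time could be sketched as follows; this is an illustration under our own naming assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

# Illustrative sketch only: a parameterized rule template. The designer fills
# in the time-limit parameter when attaching the rule to an activity.

@dataclass
class Activity:
    title: str
    completed: bool = False

@dataclass
class TimeLimitRule:
    """'if a time limit ( ) is over then complete the activity'"""
    time_limit_minutes: int                    # the parameter the designer assigns

    def apply(self, activity: Activity, elapsed_minutes: int) -> None:
        # Fire only once the assigned time limit has elapsed.
        if elapsed_minutes >= self.time_limit_minutes:
            activity.completed = True

# The designer assigns one hour (60 minutes) to the parameter:
responding = Activity(title="responding")
rule = TimeLimitRule(time_limit_minutes=60)
rule.apply(responding, elapsed_minutes=75)     # limit exceeded, so completed
```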
In this way, we identified all the vocabularies and specified the semantics and syntax of the peer assessment modeling language. By putting all the vocabularies together and specifying their relationships, we defined a meta-model of peer assessment processes. According to the meta-model, four types of stages make up a peer assessment process: assessment design, evidence creation, assessment, and reaction. In the assessment design stage, one or more activities such as constructing an assessment form, designing an assignment, preparing material, and designing criteria may take place. A designer can perform one or more activities, and one activity can be performed by one or more designers. Performing design activities may produce an assignment description, assessment forms, and/or criteria. Note that the assessment design stage may or may not be included in a peer assessment, because sometimes the assignment description and the assessment form have been defined before the peer assessment process starts. Regardless of whether the assessment design stage is included, a peer assessment actually starts from the evidence creation stage, in which one or more candidates work on assignments, for example, by responding to a questionnaire or performing tasks according to an assignment description. Sometimes the performance is observed and recorded by an observer. The assignment outcomes and/or records are then produced and distributed to the activities in a subsequent assessment stage, in which one or more reviewers evaluate the allocated assignment outcomes using the assessment form and criteria, and finally provide feedback in the form of comments, rates, grades, rankings, and so on. In summative peer assessment, the process may end here. In formative peer assessment, a reaction stage typically follows, in which the candidate may view or review the feedback. Sometimes candidates further improve their assignment outcomes and even request more elaborate feedback. In the latter case, the reviewer may elaborate on or provide additional feedback. In some situations, the assessment and reaction stages can be repeated several times. Note that generic activities like reading, writing, presenting, discussing, and searching can be performed in all stages (we will not discuss these types of activities here).
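The stage flow just described could be sketched, purely as an illustration (the function and parameter names are our own, not part of the meta-model's notation), as:

```python
# Illustrative sketch of the meta-model's stage sequence: the assessment
# design stage is optional, and in formative peer assessment each assessment
# stage is followed by a reaction stage, possibly over several rounds.

def build_process(include_design: bool, formative: bool, rounds: int = 1):
    """Return the ordered list of stages for one peer assessment process."""
    stages = []
    if include_design:
        stages.append("assessment design")
    stages.append("evidence creation")
    for _ in range(rounds):
        stages.append("assessment")
        if formative:                          # summative processes may end
            stages.append("reaction")          # after the assessment stage
    return stages
```

For instance, a summative design without a design stage yields just evidence creation followed by a single assessment stage, while a formative design with two rounds alternates assessment and reaction stages.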
The meta-model can be used as a high-level process modeling language to specify various peer assessment models. In fact, many details of the meta-model are represented as alternatives (e.g., score, grade, rating, and comment), constraints (e.g., time), and rules (e.g., a given number of review tasks per reviewer). When specifying a peer assessment model, one has to represent the design decisions to be made in terms of the modeling language. Thus one should specify, for example: how many participants will be involved and what roles they will play; which kinds of assignments (e.g., an essay, a list of multiple-choice questions, and/or a demonstration) will be performed, and whether individual candidates have different assignments or the same one; whether each reviewer reviews one or several assignment outcomes of his/her peers; whether the assessment is anonymous, confidential, or public; and whether assignment outcomes will be distributed reciprocally or mutually. When all necessary design decisions are represented, a peer assessment model is created.
Oliver and Littlejohn (2006) have suggested representing pedagogical practice in an appropriate form that teachers can easily apply, adopt, adapt, and reuse. They developed a learning design language, LDLite, with which practitioners can represent a lesson plan as a sequence of online and face-to-face activities using a matrix. To enable practitioners to use the peer assessment modeling language easily, we offer a similar form for documenting a peer assessment design. However, our form is based on the meta-model and will be implemented as an authoring tool to support practitioners in representing a peer assessment design easily. As illustrated in Table 1, a peer assessment design represented in our peer assessment modeling language consists of two parts. The first part of the table describes the generic features of a peer assessment design.
The second part of the table articulates the assessment procedure.

Representation of a peer assessment design using the peer assessment modeling language
In Table 1, the cells with bold text are not editable areas. A designer can represent design ideas by inputting information or selecting options. A designer (for instance, a teacher) has to provide information for the mandatory items, which are indicated with the symbol (m); items indicated by (o) are optional. For example, all participants, nominal roles, and organizational structures must be listed. If there are expected outcomes that will be produced in the peer assessment, they should be listed as artifact items.

Table 1. Peer assessment design form.

Goals (o): Of staff and/or students? Time saving or cognitive/affective gains?
Prerequisites (o): What work/knowledge is required before undertaking this assessment work?
Description (o): A narrative overview of the learning scenario.
Design rationale (o): Why make such a design?
Estimated duration (o): How long will this assessment take on average?
Learning content and levels (o): What qualifications will be assessed, and at which level?
Population (o): What are the characteristics of the target group?
Groups and participants (m): Who will be involved? (assign a label to each participant)

A teacher can design a peer assessment process by choosing a stage from a set of predefined stages, for instance, an assessment design stage (see the lower section of Table 1). The teacher can then choose a type of activity that can be performed in the assessment design stage, for example, a designing assignment activity. If necessary, the teacher can specify attributes of the activity such as its title and description. When designing with an authoring tool, the teacher will be prompted to select a type of assignment, such as a text-based description or a questionnaire. If the teacher selects a questionnaire, an instance will automatically appear in the outcome column in the same row as the design assignment activity. The teacher will then be prompted to select an existing questionnaire or create a new one. For the former choice, the teacher should give a link to a Web page or a local file where the questionnaire is accessible; for the latter, a questionnaire authoring tool can be launched when the teacher clicks on the questionnaire. The teacher also needs to specify which participants should play the role of designer for this activity. If necessary, the teacher can create an information resource in the object column, or drag an information resource item from the list in the first part of the form and drop it into this cell. For some types of activities, a special service, such as a forum, can be defined. By right-clicking in the rule column, the teacher is presented with situated rules from which to choose; for instance, by choosing 'if a time limit ( ) is over then complete the activity', the teacher can assign a time period (e.g., one hour) to specify this rule.
The authoring tool can check for conflicts and incomplete definitions on demand. Guided by the authoring tool in this manner, the teacher represents a peer assessment design step by step until it is complete.

An example peer assessment design
Before implementing the target authoring tool, we conducted an internal test to investigate whether teachers could represent peer assessment designs using the modeling language. The test was undertaken by two researchers who had substantial teaching experience in higher education but who had not participated in any of the previous development stages of the modeling language. Prior to the test, they received information about the philosophy and background of the modeling language, as well as a description of the language itself. They then selected two authentic peer assessment scenarios and determined how the elements of these peer assessments matched the modeling language. Because of space limitations, we will only briefly describe the first scenario, which was part of a course on learning theories for students attending a master's program in educational sciences. The second example, which entails a peer assessment in traditional face-to-face teacher education, is not included in this article. Table 2 illustrates the representation of the first peer assessment scenario, which was created by the two researchers using the peer assessment modeling language. The test results reveal that it is possible for teachers to represent their peer assessment practices using the modeling language, although it remains difficult. The difficulties can be summarized thus: distinguishing between the role of planner at design time and the role of designer at runtime, using advanced modeling features (e.g., activity-structures and distribution rules), and representing highly complex peer assessment practices. These test results are valuable for further improvement of the modeling language and the development of the authoring tool.

Table 2. Example of a peer assessment scenario as applied in a distance education course.
Title: Peer assessment in a distance course on learning theories
Objectives: Increasing students' awareness of their own peer assessment skills
Prerequisites: The entire course entails nine tasks. Students need to finish the previous course tasks before they are allowed to start task 6, which contains the peer assessment exercise.
Description:
• Each student needs to write an essay
• The student uploads the essay, and the essay is distributed to a peer student
• The peer student assesses the essay, writes a feedback report, and uploads it
• Students reflect on the feedback, write a short note on how they used the feedback to improve their own essay, and then finalize their essay
• Students submit all documents to their teacher
Functions: The peer assessment exercise is obligatory for passing the exam of this course
Estimated duration: From start to finish, approximately two weeks
Privacy: Peers provide feedback on the essay anonymously
Learning content and levels: Basic knowledge of Gagné's learning theory
Population: Students are adults who are used to learning individually, at their own pace

Discussion
This section discusses two specific issues concerning the peer assessment modeling language and two relevant generic issues. The first specific issue is whether a peer assessment modeling language can help teachers develop pedagogically sound learning designs. Because the conceptual framework of the modeling language is based on activity theory, we expect that teachers will be able to understand the conceptual framework easily and to think about and reflect on their designs in its terms. Furthermore, the vocabularies of the peer assessment modeling language are close to those teachers use in daily practice; they do not need to describe a learning process by mapping peer-assessment-specific concepts to generic notations like activity and property used in IMS LD. The results of our test reveal that teachers, with little training, may be able to represent their own peer assessment scenarios and understand peer assessment designs developed by others, although sometimes with difficulty. Moreover, the semantics and syntax of the language are clearly defined in the meta-model. For example, a responding activity has to be performed in the evidence creation stage by a candidate, who answers a questionnaire and produces a response using a test tool. Another example is the embedded principle that the review task load should be balanced across participants. That is, pedagogical knowledge and principles are embedded in the design of the peer assessment modeling language. They can affect the teacher's design in two ways. Firstly, they can be used as guidance for teachers to make appropriate design decisions. For example, when a teacher wants to define an activity in an evidence creation stage, only the appropriate activity types, including responding, will be suggested by the authoring tool.
When the teacher makes a design decision, for instance, selects a responding activity, the associated design decisions (e.g., specifying a test tool, a questionnaire, and a response in the responding activity) will be further suggested and may even be made automatically by the authoring tool as defaults. Second, the pedagogical knowledge and principles embedded in the peer assessment modeling language can be used to check whether a teacher's design violates certain pedagogical principles. For example, when the authoring tool checks the design and finds that some reviewers have to review too many assignment outcomes while others have been assigned too little work, warning messages will be presented to the teacher. Thus, the quality of the peer assessment design is guaranteed, to some extent, in terms of pedagogical soundness.
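The article does not give the checking algorithm, but the workload-balance check described above can be sketched as follows. This is a hypothetical illustration; the function name, the `assignments` structure (outcome to list of reviewers), and the `tolerance` parameter are our assumptions, not part of the actual authoring tool.

```python
from collections import Counter

def check_review_balance(assignments, tolerance=1):
    """Warn when reviewers' workloads differ by more than `tolerance`.

    `assignments` maps each assignment outcome (e.g., an essay) to the
    list of reviewers assigned to it. Structure is illustrative only.
    """
    load = Counter()
    for reviewers in assignments.values():
        load.update(reviewers)
    if not load:
        return []
    max_load, min_load = max(load.values()), min(load.values())
    warnings = []
    if max_load - min_load > tolerance:
        for reviewer, n in sorted(load.items()):
            if n == max_load:
                warnings.append(f"{reviewer} reviews {n} outcomes (too many)")
            elif n == min_load:
                warnings.append(f"{reviewer} reviews {n} outcomes (too few)")
    return warnings
```

An authoring tool could run such a check whenever the teacher edits the reviewer assignment and surface the returned messages as warnings rather than hard errors, since the teacher may have pedagogical reasons for an uneven distribution.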
The second specific issue is whether a peer assessment design can be executed in a computer-based learning environment. Instead of developing an execution environment compliant with the peer assessment modeling language, we are going to transform the peer assessment design into a technically executable model represented in IMS LD and IMS Question and Test Interoperability (QTI) (IMS Global Learning Consortium, 2006). Comparing the conceptual framework used for defining the peer assessment modeling language with the constructs of IMS LD, we find many one-to-one mappings from the components of the conceptual framework to the elements of IMS LD. For example, a stage can map to an act; a specific activity, such as responding or commenting, can map to a learning activity or a support activity; a specific role, such as candidate or reviewer, can map to a learner or a staff member; an information resource can map to a learning object; and a service can map to an LD service. Although IMS LD has no concept corresponding to artifact, an artifact such as a questionnaire, an assignment description, a comment, a status, or a grade can be interpreted as an IMS QTI assessment test or an IMS LD property with a certain data type (e.g., a file, a string, a Boolean, or an integer). An environment will be generated for each activity, within which all associated artifacts, outcomes, and services will be included. Because of the technical complexity and limited space, we will not discuss the transformation algorithm further in this article. In summary, a peer assessment design can be transformed into an executable model codified in IMS LD and IMS QTI.
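The one-to-one mappings listed above can be summarized as a simple lookup table. The sketch below restates only the correspondences named in the text; the dictionary keys and the target identifiers are illustrative shorthand, not the actual element names of the transformation implementation.

```python
# Illustrative mapping from peer assessment concepts to IMS LD/QTI
# constructs, restating the correspondences described in the text.
CONCEPT_TO_IMS = {
    "stage": "act",
    "responding": "learning-activity",
    "commenting": "support-activity",
    "candidate": "learner",
    "reviewer": "staff",
    "information-resource": "learning-object",
    "service": "ld-service",
    # Artifacts have no direct IMS LD counterpart; they are interpreted
    # as a QTI assessment test or as typed IMS LD properties.
    "questionnaire": "imsqti:assessment-test",
    "comment": "ld-property:string",
    "status": "ld-property:boolean",
    "grade": "ld-property:integer",
}

def map_concept(concept):
    """Return the IMS LD/QTI construct for a peer assessment concept."""
    return CONCEPT_TO_IMS.get(concept, "unmapped")
```

A real transformation would of course also generate the per-activity environments and wire up the artifact flows, which is the part the article leaves out for reasons of space.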
Finally, we discuss two generic issues. The first is related to language flexibility. We compare pedagogy-neutral and pedagogy-specific approaches to developing an educational modeling language. As Koper (2001) pointed out, there are many different standpoints from which one can answer questions about learning. Nevertheless, commonalities in education also abound. IMS LD was developed as a pedagogy-neutral modeling language by abstracting these commonalities. Because the elements defined in IMS LD are generic, IMS LD is sufficiently flexible to represent a wide range of pedagogies. In order to provide specific support for describing various peer assessment scenarios, we developed a peer assessment modeling language as a pedagogy-specific modeling language. In fact, a more generic assessment process modeling language (for modeling various forms of assessment processes) and a more specific peer assessment modeling language (e.g., for modeling pure peer assessment involving only two students) could be developed for working at different levels of abstraction. In general, there are hundreds of theoretical and practical models of learning and instruction, such as competence-based learning, problem-based learning (PBL), mastery learning, experiential learning, and case-based learning (Koper, 2001). Teaching and learning based on a particular model has more commonalities than education overall. For example, although there are various strategies for PBL, there are many commonalities among the diverse PBL strategies. These PBL-specific commonalities could be abstracted to develop a PBL-specific modeling language. In particular, the commonalities could be abstracted at different levels, so that more specific PBL modeling languages could be developed for use in different contexts. In theory, a more pedagogically generic modeling language can be used to represent a wider range of teaching and learning strategies.
However, the user has to construct pedagogy-specific mechanisms using generic notations. A more pedagogically specific modeling language is likely to be easier for teachers, because its vocabularies are more pedagogically meaningful to them. However, such a language will lose flexibility to some extent, because the semantics and syntax of the notation have been more elaborately and specifically defined.
The final issue is the degree of formality of the modeling language. Usually teachers are not accustomed to designing lesson plans formally. They may prefer to represent their design ideas informally. Even if their learning designs are incomplete or inaccurate, teachers can interpret the designs by drawing on their knowledge and experience. However, a computer-based learning environment is currently not sufficiently intelligent to interpret incomplete and ambiguous representations. In order to make a learning design interpretable and executable, the learning design has to be represented in a formal modeling language and has to articulate all technical details, such as appropriate data types and value domains. It is difficult, even impossible, for ordinary teachers to handle these technical complexities. To enable both teachers and machines to interpret learning designs, our approach is to formally define domain-specific vocabularies and grammar as a meta-model. For example, when defining feedback, teachers only need to choose a comment, a grade, or another type of feedback. They do not need to define a personal- or role-relevant property with a data type of string or integer. The operational semantics of the comment and grade have been defined in the language. Another example is the rule "exchange artifacts mutually". This rule is formally defined in the language. When it is applied, the articulation of artifact flows among peers will be specified automatically according to the semantics of this rule. That is, if a modeling language is appropriately defined at a high level, teachers can represent their designs with the high-level vocabularies and rules. The high-level representation can be interpreted by a machine according to the operational semantics defined in the language, so that teachers can avoid handling technical details and complexities.
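To make the operational semantics of the "exchange artifacts mutually" rule concrete, the expansion it implies can be sketched as below. This is our illustrative reading, assuming the rule routes each peer's artifact to every other peer in the group; the function name and the flow representation are hypothetical.

```python
def exchange_mutually(peers):
    """Expand the 'exchange artifacts mutually' rule into explicit
    artifact flows: each peer's artifact is routed to every other peer.
    Returns (sender, receiver) pairs. Illustrative sketch only.
    """
    flows = []
    for sender in peers:
        for receiver in peers:
            if sender != receiver:
                flows.append((sender, receiver))
    return flows
```

The point of the example is that the teacher states only the one high-level rule, while the execution environment derives all of the individual artifact flows (n × (n − 1) of them for n peers) from the rule's formally defined semantics.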

Conclusions
In this article, we have presented a conceptual framework for an educational process modeling language based on activity theory. We have outlined our technical approach to applying the DSM paradigm to the development of a peer assessment modeling language based on this conceptual framework. Through analysis, we have drawn two conclusions.
(1) The commonalities of learning and teaching can be abstracted at different levels. The more generic the notations, the more flexible the modeling language, which can then describe a wider range of pedagogical strategies. The more specific the modeling language, the more support can be provided, particularly for inexperienced designers, in developing a learning design. If pedagogical knowledge and principles are implemented in a pedagogy-specific modeling language, the corresponding authoring tool can use them as guidance and checking mechanisms to improve the quality of learning designs.
(2) By formally specifying the operational semantics of the pedagogically meaningful vocabularies of an educational modeling language, a learning design represented in such a language becomes interpretable by a machine. Teachers will benefit from the use of such a language, because they do not have to handle the technical details and complexities needed for execution in a computer-based learning environment.
As the next step, we will implement the peer assessment authoring tool based on the peer assessment modeling language. We will also implement the mapping algorithms to translate a peer assessment design into IMS LD code. In this article, we have emphasized that the language and authoring tool are developed for distance and e-learning settings, but a spin-off may be that they can also be used by teachers in face-to-face situations.
The internal test has detected some issues that require further attention in the development of both the language and the authoring tool. However, further evaluation needs to be undertaken, in particular during and after the development of the authoring tool. Substantial groups of teachers employed in various educational sectors will need to be included in future evaluation activities in order to establish more comprehensive insights into the usability of this language in daily teaching practice. In addition, the work can be extended in the future to developing other pedagogy-specific modeling languages and corresponding authoring tools.