
Education Plugs into Standards: Data and Service Specifications for Interoperability

Standards should serve to simplify the use of complex systems. Here, four experts examine how standards impact the interoperability of learning systems and propose a tool for gathering metrics and providing useful data for discussion.

Learning technology has changed dramatically as the Internet expanded, processor speeds increased, and storage capacity multiplied. From the point of view of teachers and learners, the limitations that hardware and software imposed on learning applications no longer constrain innovation. Better educational tools, more efficient languages for creating functionality, and richer conceptual structures with which to express educational ideas are now available.

These capabilities have enabled literally thousands of organizations to provide learning technology in the form of content, delivery systems, and component services. However, the expectation that a content management system, a collaboration infrastructure, and an enterprise environment from different providers can be combined readily, or that new functionality can be built on the shoulders of its predecessors, is almost sure to be wrong. Without a solution to this "interoperability problem," the exuberant creativity that now characterizes learning technology will result in a fragmented and frustrating disarray of incompatible silos, each containing examples of what could have been.

The Impact of Silos
Quality educational content and effective component functionality are trapped within an infrastructure of silos. From both supply-side and demand-side perspectives, the resulting underutilization of intellectual capital is a serious liability in an era characterized by mobile learners, a global marketplace, and cost constraints. On the supply side, lack of interoperability makes in-house software development inefficient, hampers integration of components among partners, makes vendors less nimble competitively, and renders suppliers generally less responsive to marketplace conditions and customer requirements. On the demand side, the educational user is the big loser: Creating or incorporating educational content and tools into enterprise learning environments and everyday educational practice is prohibitively difficult and costly.

If organizations use the same specifications for expressing requirements and for developing software, then, in theory, the resulting products and services will work together or will fail to do so in predictable ways. However, specifications usually determine only an abstract level of interoperability, while practical interoperability depends on extending or limiting the flexibility of specifications to suit the specific requirements of a particular domain of application. Obviously, extremely abstract specifications and extremely specific "application profiles" both defeat the purpose of standardization. In practice, specifications and profiles trade off the abstraction necessary for general interoperability against the particularization necessary for specific implementation.

Data Structure Definitions
The first generation of specifications developed by the members of IMS and other consortia provided definitions for the data structures necessary to move various types of data (metadata, course content, assessment content, learner information, group information, etc.) among components of learning systems and between learning systems and other systems in their enterprise environments. Profiles of such data specifications are the basis for content exchange in the Advanced Distributed Learning (ADL) Co-Lab Sharable Content Object Reference Model (SCORM).
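As a concrete illustration of what a data specification standardizes, the following sketch defines a minimal metadata record in Python. The field names are invented for illustration and are not the actual IMS or SCORM metadata elements; the point is simply that two systems agreeing on such a structure can exchange records without bespoke translation code.

```python
# A minimal sketch, assuming invented field names; this is NOT the actual
# IMS or SCORM metadata schema. It shows the kind of structured record
# that a data specification standardizes so systems can exchange it.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningResourceMetadata:
    title: str
    description: str
    language: str = "en"
    keywords: List[str] = field(default_factory=list)
    media_type: str = "text/html"  # MIME type of the packaged content

# Any system that agrees on this structure can serialize and exchange the
# record without custom translation code.
record = LearningResourceMetadata(
    title="Introduction to Electromagnetism",
    description="A self-paced module on fields, forces, and Maxwell's equations.",
    keywords=["physics", "electromagnetism"],
)
print(record)
```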

Service Definitions
Service architectures are not new (consider the Object Management Group's CORBA, www.omg.org, and the IMS v05 architecture, www.imsproject.org), but they are evolving to recognize the importance of distinguishing between the core services, such as authorization and authentication, that are provided by enterprise software systems and the learning services that need them. More specifically, educational applications themselves need to focus on their domain objectives, for example, teaching electromagnetism in physics. They require information about who can use them, what permissions they have to interact with components of the application, and where the application should write files, logging information, and so on. Though these services are necessary, they are fundamentally distinct from the purpose of the application itself.

In fact, software applications for learning need services unique to the educational environment. These might include information about class membership, assessment services, and access to digital library resources. These education-specific services help the application developer focus on the learning objectives. Building services in this sense provides a critical abstraction between the essential but supportive role of enterprise systems and the central, domain-specific role of the educational application in facilitating learning (see "The Context for Interoperability").
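A minimal Python sketch of this separation, using hypothetical interface names (these are not the actual OKI or IMS service definitions), might look like the following. The educational application codes against abstract core and education-specific services, never against a particular campus implementation.

```python
# A minimal sketch, assuming hypothetical interface names; these are not
# the actual OKI or IMS service definitions. The application depends only
# on abstract services, not on any particular campus implementation.
from abc import ABC, abstractmethod
from typing import List

class AuthenticationService(ABC):
    """Core enterprise service: establish who the user is."""
    @abstractmethod
    def authenticate(self, credentials: dict) -> str:
        """Return a user id, or raise an error if the credentials are invalid."""

class CourseMembershipService(ABC):
    """Education-specific service: who is enrolled in a given course?"""
    @abstractmethod
    def members_of(self, course_id: str) -> List[str]:
        ...

class ElectromagnetismTutor:
    """The application focuses on its domain objective (teaching physics)
    and delegates authentication and membership to the injected services."""
    def __init__(self, auth: AuthenticationService,
                 membership: CourseMembershipService) -> None:
        self.auth = auth
        self.membership = membership

    def start_session(self, credentials: dict, course_id: str) -> bool:
        user = self.auth.authenticate(credentials)
        return user in self.membership.members_of(course_id)
```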

The Context for Interoperability

The Sharable Content Object Reference Model (SCORM, www.adlnet.org), developed by the ADL Co-Laboratory (www.jointadlcolab.org) and now in revision 1.2, combines elements drawn from specifications developed by the Aviation Industry CBT Committee (www.aicc.org), IMS (www.imsproject.org), and Ariadne (www.ariadne-eu.org) to provide interoperability between content and learning management systems. The ADL Co-Lab participants include some 15 U.S. Government agencies, and SCORM has been used widely to specify basic requirements for learning content.

The IMS Global Learning Consortium (www.imsglobal.org) develops open technical specifications to support distributed learning. Ten specifications have been released to date, and several are being adopted internationally as de facto standards for learning technology. All specifications developed by IMS are available to the public without charge through the IMS Web site. IMS is a non-profit organization supported by a worldwide consortium that includes 50 Contributing Members and 60 Developers Network subscribers. The IMS in Europe foundation supports activities for European members.

The Open Knowledge Initiative (http://web.mit.edu/oki) is a collaboration among leading universities, led by the Massachusetts Institute of Technology (MIT), together with specification and standards organizations, to support innovative learning technology in higher education. The result is an open and extensible architecture that specifies how the components of an educational software environment communicate with each other and with other enterprise systems. OKI provides a modular development platform for building both traditional and innovative applications while leveraging existing and future infrastructure technologies. The first release of the core service interface definitions is available through SourceForge (http://sourceforge.net/projects/okiproject).

[Authors' note: To join a discussion on the factors influencing interoperability, go to http://worktools.si.umich.edu and enroll in WorkTools. Then send a message to [email protected] to request permission to join the ALTI-meter work site.]

Dimensions of Interoperability: Building Application Profiles
Interoperability can be viewed in terms of a family of choices (Service Definitions, Data Definitions, Technology Choices, and UI/Application Frameworks), each of which limits the likelihood that the final product will continue to operate if it is moved to a new environment. Consider a basic service that enables a learner to save some data. To provide that service requires choosing a container for the bits of data, a means for creating and manipulating the data, and a user interface to present the data to the user and collect his or her response. Although the service itself is likely to be required by virtually any education or training environment, each of the choices made to implement it may limit the implementation to one sector or one kind of environment within a sector. For example, the data structures that are chosen for content may be suitable for a typical higher education environment, but not for a typical K-12 environment, or the data exchange services chosen may be suitable for a SCORM-compliant environment, but not for other training domains.
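To make these trade-offs concrete, here is a minimal Python sketch of the "save some learner data" service, using hypothetical names of our own invention rather than any published specification. The abstract definition says nothing about the data container; each implementation below represents one of the choices that can tie a deployment to a particular sector or environment.

```python
# A minimal sketch, assuming hypothetical names; it is not drawn from any
# published specification. The abstract service says nothing about storage;
# each implementation is one of the environment-binding choices.
import json
import sqlite3
from abc import ABC, abstractmethod
from typing import Optional

class LearnerDataStore(ABC):
    @abstractmethod
    def save(self, learner_id: str, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, learner_id: str, key: str) -> Optional[str]: ...

class JsonFileStore(LearnerDataStore):
    """One choice: a flat JSON file, perhaps fine for a small pilot."""
    def __init__(self, path: str) -> None:
        self.path = path
    def save(self, learner_id: str, key: str, value: str) -> None:
        try:
            with open(self.path) as f:
                data = json.load(f)
        except FileNotFoundError:
            data = {}
        data.setdefault(learner_id, {})[key] = value
        with open(self.path, "w") as f:
            json.dump(data, f)
    def load(self, learner_id: str, key: str) -> Optional[str]:
        try:
            with open(self.path) as f:
                return json.load(f).get(learner_id, {}).get(key)
        except FileNotFoundError:
            return None

class SqliteStore(LearnerDataStore):
    """Another choice: a relational table, perhaps mandated by an enterprise."""
    def __init__(self, db_path: str) -> None:
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS learner_data "
            "(learner_id TEXT, key TEXT, value TEXT, "
            "PRIMARY KEY (learner_id, key))")
    def save(self, learner_id: str, key: str, value: str) -> None:
        self.conn.execute(
            "INSERT OR REPLACE INTO learner_data VALUES (?, ?, ?)",
            (learner_id, key, value))
        self.conn.commit()
    def load(self, learner_id: str, key: str) -> Optional[str]:
        row = self.conn.execute(
            "SELECT value FROM learner_data WHERE learner_id=? AND key=?",
            (learner_id, key)).fetchone()
        return row[0] if row else None

# The application writes to the abstraction; swapping stores requires no
# change to application code, only to the deployment profile.
store: LearnerDataStore = JsonFileStore("learner_data.json")
store.save("learner-42", "last_page", "lesson-3")
print(store.load("learner-42", "last_page"))
```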

The complexity of choices and their resulting impact on interoperability are difficult to exaggerate. Some examples may help illustrate the work that lies ahead in developing an understanding of interoperability sufficient for the needs of learning technologists. The first example concerns the vexatious interoperability issues associated with printing. The second concerns the more general interoperability issues associated with getting people from scattered locations to come together for a working meeting at a single location. The latter (meeting) example underscores the kind of interoperability needed to engender the conditions for learning; the former (printing) example illustrates the kind needed to engage users directly in the learning itself.

Consider the problem of printing from a desktop application to a printer or file. For simplicity, the configuration shown in Figure 1 omits the network routers, print spoolers, and other details that come into play.

Figure 1: The problem of printing from a desktop application to a printer or file saved to disk.

Even this conceptually simple process shows why systems architects define implementation-independent constructs such as printing APIs (Application Programming Interfaces). Without such abstractions every software application on the shelves of CompUSA that prints would have to incorporate software to deal directly with every printer. Instead, the application can "print" to one API, and the printer manufacturer can provide a driver for the API that can be used by any application.

Standardization of communication protocols is also critical. Using a commonly agreed-upon protocol makes it possible for the printer manufacturer to leverage existing printer drivers that users might already have installed on their computers. The abstraction and the means of communicating with it are both required in practice. Requiring that software applications interact with abstract APIs and protocols also allows the systems integrator to insert control panels, between the application code and the wire protocol, that let the user select a particular printer for a particular purpose. By analogy, the printing example illustrates the level of detail necessary to provide interoperability for learning technology.
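The following Python sketch restates the printing analogy in code, using hypothetical class names rather than any real operating-system printing API. The application prints to one abstract interface, each device supplies a driver, and a control-panel layer selects among destinations without the application knowing which driver sits behind each one.

```python
# A minimal sketch, assuming hypothetical class names; this is not a real
# operating-system printing API. The application prints to one abstract
# interface, each device supplies a driver, and a control panel selects
# among destinations.
from abc import ABC, abstractmethod
from typing import Dict

class PrinterDriver(ABC):
    @abstractmethod
    def print_document(self, document: bytes) -> None: ...

class LaserPrinterDriver(PrinterDriver):
    def print_document(self, document: bytes) -> None:
        # A real driver would translate to the wire protocol the device speaks.
        print(f"[laser] sending {len(document)} bytes over the agreed protocol")

class FileDriver(PrinterDriver):
    """'Printing' to a file saved on disk, as in Figure 1."""
    def __init__(self, path: str) -> None:
        self.path = path
    def print_document(self, document: bytes) -> None:
        with open(self.path, "wb") as f:
            f.write(document)

class ControlPanel:
    """The systems integrator's layer: lets the user pick a destination
    without the application knowing which driver sits behind it."""
    def __init__(self, drivers: Dict[str, PrinterDriver]) -> None:
        self.drivers = drivers
    def print_to(self, name: str, document: bytes) -> None:
        self.drivers[name].print_document(document)

panel = ControlPanel({"office laser": LaserPrinterDriver(),
                      "save as file": FileDriver("report.ps")})
panel.print_to("office laser", b"a small test document")
```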

The second analogy illustrates the scale and scope of interoperability necessary for learning environments to set the context for and support learning interactions. Consider the interoperability needed to convene a meeting of people from geographically dispersed settings. Interoperability is required among networks, operating systems, and applications; transportation infrastructures, airlines, and rental car companies; hotel and catering companies; and a variety of other services before individuals can convene and conduct an ordinary face-to-face meeting.

Learners and others using a learning environment constitute a virtual version of the complex of interoperable activities, products, and services exemplified by the printing and meeting analogies. In other words, learning technology must meet the enterprise and business-model requirements of interoperability, as well as enable the interactions among all stakeholders and the routine activities involved in learning.

Measuring Interoperability
Judgments about the level and type of interoperability that is required by a given application, or provided by a given functional component, are currently based on opinion rather than on measures of demonstrable properties. Such judgments amount to "I know it when I see it." To the extent that they are valid, they may serve to focus inquiry on examples, or on comparisons between examples, from which systematic measures could be developed. But these judgments are not facts, and they do not provide an adequate basis for making decisions or for assessing technical progress.

The fundamental requirements for assessing interoperability are one or more relevant dimensions of measurement and some replicable measure along those dimensions. The operative adage "If it can't be measured, it can't be improved" captures a basic scientific method. Both relevance and replicability are important; neither alone is sufficient. For example, the number of lines of code in a file is easy to count and recount, but by itself it is not likely to differentiate levels of interoperability.

ALTI-Meter Data and Displays
We propose to develop a "meter" for measuring advances in learning technology interoperability—an "ALTI-meter" (Advancing Learning Technology Interoperability-meter). This proposal is intended to start a systematic discussion of how to quantify interoperability, not to suggest a conclusive means for doing so.

Developing an ALTI-meter will require rough consensus about which dimensions of measurement are relevant to interoperability and which measures along those dimensions differentiate levels of interoperability. ALTI-meter readouts should make it possible to compare the interoperability an application requires with the interoperability a component provides. That is, they should provide a basis for meaningful dialogue between users and developers.

For the sake of simplicity and utility, we propose to use standard spreadsheets for recording data, and the so-called "radar" chart illustrated in Figure 2 for displaying the data for analysis and decision-making.

Figure 2: A sample ALTI-meter readout. The axes represent dimensions of interoperability. The scale on each axis is the metric of the particular interoperability dimension.
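As one way to make the proposal tangible, the following Python sketch generates a Figure 2-style radar readout from spreadsheet-like rows. The dimension names and the 0-5 scores are invented purely for illustration; the community would still have to agree on the real dimensions and metrics. It assumes matplotlib is installed.

```python
# A minimal sketch of an ALTI-meter readout, assuming invented dimension
# names and 0-5 scores purely for illustration; the community would have
# to agree on the real dimensions and metrics. Requires matplotlib.
import math
import matplotlib.pyplot as plt

# Rows as they might appear in a shared spreadsheet:
# (dimension, required level, provided level).
rows = [
    ("Service definitions", 4, 3),
    ("Data definitions", 5, 4),
    ("Technology choices", 3, 2),
    ("UI/application frameworks", 2, 4),
]

labels = [r[0] for r in rows]
angles = [2 * math.pi * i / len(rows) for i in range(len(rows))]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for column, series in ((1, "required"), (2, "provided")):
    values = [r[column] for r in rows]
    # Repeat the first point so each polygon closes.
    ax.plot(angles + angles[:1], values + values[:1], label=series)
ax.set_xticks(angles)
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
ax.legend(loc="lower right")
plt.savefig("alti_meter.png")
```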

Functional Needs
Taking a "user perspective" on interoperability focuses us not only on considerations that are necessary for developers, but also on those for delivering learning technology, tools, and applications as useful educational services in everyday enterprise environments. This user perspective on requirements also brings forward issues of substance and scale that may be only extrinsic considerations for a demonstration or a stand-alone system, but must be addressed to provide adoptable learning technology.

We are proposing that the community collaborate to identify dimensions of measurement and measurement criteria for an ALTI-meter to explore and evaluate interoperability systematically. The ALTI-meter is, of course, a device to focus our thinking. It is also an intellectual tool needed to approach the problem. The creative energy and thought needed to describe it will move us to the next generation of learning technologies.
