XCRI Knowledge Base

Setting standards for more effective courses information management


All About XCRI

The XCRI community blog: all about eXchanging Course Related Information (XCRI) and its Course Advertising Profile (XCRI-CAP).


Consuming XCRI-CAP IV: Trainagain

by alanepaull on Tuesday, 02 April 2013, in General

This post is a summary of the 'consuming XCRI-CAP' work I did in relation to the Trainagain service, a fairly straightforward SQL Server-based system holding information on a few thousand short courses for use in a searchable website.

The purpose of the Trainagain database was to provide searchable data about short training events. It was not optimised for courses in the XCRI-CAP sense, though it could hold such data. One difference was the concept of an 'event': in Trainagain this was a 'short course' offered by a provider (usually an FE college or private training provider) and held at a single venue, so all XCRI-CAP presentations become 'events' for Trainagain.

Most of the courses data in the Trainagain database is held in a single table called Event. There are links to tables for Country (in the world), Area in the UK (primarily counties), Category (very limited reference data about the subject of study) and EventType (4 values: 'Course', 'Network', 'Apprenticeship', 'Consultancy'). There are other tables not relevant to this work. The Event table has a good number of fields that matched the XCRI-CAP requirements, including conventional items such as start and end dates, duration, venue information and title, and also some larger descriptive fields such as a general summary, eligibility and outcome.
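
To make the correspondence concrete, here's a minimal sketch of the kind of XCRI-CAP 1.2 fragment involved, annotated with the Event fields it might feed; the elements are genuine XCRI-CAP, but the course content and the Event column names in the comments are illustrative assumptions, not Trainagain's actual schema:

    <!-- Illustrative only: the Event.* column names are assumed -->
    <course>
        <dc:title>Food Hygiene Certificate</dc:title>                          <!-- Event.Title -->
        <dc:description>A one-week introduction for catering staff.</dc:description>  <!-- Event.Summary -->
        <presentation>
            <start dtf="2013-09-02">2 September 2013</start>                   <!-- Event.StartDate -->
            <end dtf="2013-09-06">6 September 2013</end>                       <!-- Event.EndDate -->
            <mlo:duration>5 days</mlo:duration>                                <!-- Event.Duration -->
        </presentation>
    </course>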

Trainagain has some mandatory data requirements for records in the Event table: the reference data items specified in the relationships described above for Country, Area, Region, Category and EventType.

The overall approach I took was iterative, in order to determine what the practical difficulties were and their solutions - by no means production-level! Starting with one institution's data, I mapped XCRI-CAP to the Trainagain structure using Altova MapForce. This tool generates SQL statements that can be used to import data into the database, showing error messages on failure. The error messages were used to formulate successive new mappings and sub-processes, until the first institution's data loaded successfully. Having recorded the mapping and process, I then used them to import the other institutions' data, which revealed a small number of additional difficulties owing to differences in the XCRI-CAP content. For the last two institutions, relatively few difficulties were encountered.

One difficulty was the requirement to pre-populate reference data on records; without it, SQL Server would refuse to load them. Reference data included Trainagain's existing esoteric (but short) subject list. As the latter had only very limited utility, I replaced it with the LDCS classification, and each course record then had to be classified with LDCS codes. Other reference data imported was geographical data related to the venues, namely Country (UK), Region (EU region) and Area (county). For EventType, default reference data values were selected on import (for most XCRI-CAP data this was, unsurprisingly, 'Course'). For simplicity the geographical reference data was loaded just for the relevant areas, using a manual lookup of appropriate values; the reference data tables could easily have been populated with whole-UK data sets.

The main data was loaded into the database using MapForce's SQL routines. Trainagain uses auto-numbering for the Region and Area identifiers in the reference data tables, so these were mapped across to the main XCRI-CAP data using the reference data tables as lookups; Country defaults to the UK identifier.

No insuperable problems were encountered, though I did find the following issues:

1. EventIDs were integers, so XCRI-CAP internalID values, which are strings and may contain letters, could not be used; an autonumber was used instead. This would be a stumbling block for any updating mechanism using partial files, as the XCRI-CAP identifiers could not be used to uniquely identify changed records.

2. The FullPrice field has to be an integer, so where no value was available the value '0' had to be used - incorrect, as it conflates 'free' with 'no price given'.

3. Similarly, PlacesAvailable required an integer value, so if none was available the value '0' was used - potentially very misleading, as the course might be shown as having no places; perhaps a better default could be implemented.

4. Description, Eligibility and Outcome fields have character limits of 1,024, so data might be truncated (by contrast, the Course Data Programme data definitions allow 4,000 characters).

This work did not represent production level aggregation, but was a series of trials to investigate the problems of aggregating data into an existing system.

Compared with a web-based data entry system, the likely generic difficulties that can be extrapolated from this work were:

  • A requirement to include appropriate mandatory reference data in the data to be imported. Whether this should be within the XCRI-CAP feed is moot; for Trainagain it must be in the final data for loading into the database, so some pre-processing is needed.
  • Reference data must use a supported vocabulary or classification system. For subjects in Trainagain, this trial used LDCS, requiring extra work with the data. If data is already classified, it might be possible to use mappings between vocabularies, followed by manual checking. Otherwise manual classification from scratch, or transforms from other fields, would be needed.
  • Any manual alterations should be avoided, as they will be overwritten when a new whole data set is imported. Alternatively, an internal delta update process could be implemented to ensure that only genuine changes are made.
  • Consuming XCRI-CAP data requires extra work from the aggregating organisation, over and above a web-based data entry system. However, the amount of work done overall between the aggregating organisation and the producers is reduced significantly once the new data exchange methods are in place. One of the pieces of new work is a reasonable quality mapping from XCRI-CAP to the database structure, including any necessary transformations. Another is a well-designed set of data definitions set out by the consuming organisation for use by the producers. Fortunately, once these data definitions are in place, the producers can create good quality feeds, and then the mapping from XCRI-CAP to the database structure only needs to be done once for the consuming system to cover all the producers.
  • The experience from this work stream has shown that importing data using XCRI-CAP feeds is a practical proposition with an existing system not optimised for XCRI-CAP. Completely automatic loading of data into a live system with no intervention is not the aim; what is needed is a robust process to gather the XCRI-CAP data, carry out any pre-loading processes, including validation, on it, and then load it, with a final check that all is correctly implemented.

Decisions about the processes required will depend on specific issues:

  • Is the architecture push or pull?
  • Does the existing system use reference data? If so, how much, and how can that be included on loading the new data?
  • Will the import be 'whole data sets' or updates?
  • How frequently will the data be refreshed?
  • How much use will be made of the identifiers in XCRI-CAP and how much of other internal identifiers?
  • How will differences between XCRI-CAP data definitions and local data definitions be handled, particularly with regard to size of fields and expectations of blank or null fields?

It's still my view that with robust data definitions, good quality feeds and well-designed processes, it should be straightforward to consume XCRI-CAP data. What is needed is attention to the details of the data requirements, and to how to map and transform the incoming data to meet them. It is also worth bearing in mind that course marketing data is not particularly volatile, so minute-by-minute real-time data exchange is not a requirement; in many cases regular monthly or quarterly updates are sufficient.

 


XCRI-CAP: turn 12 days of keying into 3 hours of checking.


Consuming XCRI-CAP III: Skills Development Scotland

by alanepaull on Wednesday, 06 March 2013, in General

Skills Development Scotland has operated a data collection system called PROMT for many years. PROMT is a client application (not browser-based) that sits on your computer and presents you with a series of screens for each course you want to maintain. Each course may have many 'opportunities' (these are the same as XCRI-CAP presentations) with different start dates, visibility windows and other characteristics. Many fields in PROMT have specific requirements for content that make the experience of keying not particularly enjoyable (though it has been improved since first launch).

With OU course marketing information comprising several hundred courses and over 1,000 opportunities, it was with some relief that we at APS (running third-party course marketing information dissemination for The OU) turned to the SDS Bulk Update facility, using XCRI-CAP 1.1. We had initially been nervous of using this facility, because PROMT data is not only used for the SDS course search service, but is also directly linked to a student registration and tracking service for ILAs (Individual Learning Accounts; for non-Scottish readers, ILAs continued in Scotland even though they were discontinued for a while south of the border). Students can get ILA funding only for specific types of course, so each course/opportunity has to be approved by Skills Development Scotland. Changes to the course marketing information can result in ILA approval being automatically rescinded (albeit temporarily), which can mean the provider losing student tracking details, and therefore being at risk of losing the student entirely. So naturally we decided to do some careful testing in conjunction with both SDS and our colleagues at The OU's Scottish office.

Fortunately we discovered that when we uploaded opportunities, the system added them to existing records rather than replacing them, so student tracking was unaffected. In addition, individual fields of existing course records were overwritten, but the records remained active and their opportunities were unchanged. These features meant that data integrity was maintained for the opportunity records, and we could always revert to the existing version and delete the new one if necessary.

We were able to load new courses with new opportunities, and also existing courses with new opportunities, with no significant problems. The potential ILA difficulty was somewhat reduced because The OU's information for an individual opportunity does not need to be updated once it has been approved for ILA; our main reason for updating opportunities themselves was to add in fees information, but cost information has to be present before an opportunity can gain ILA approval, so this type of update would not interrupt ILA approval or student tracking.

Owing to requirements for some proprietary data, for example numerical fees information and separate VAT, not everything could be captured through XCRI-CAP. However, using the PROMT interface for checking the data, adding very small extras and deleting duplicated opportunities was comparatively light work, as the mass of it was handled by the XCRI-CAP import.

Strikingly good parts of our Bulk Update process (apart from the obvious vast reduction in keying time):

  • Use of a vocabulary for qualification type in PROMT. This made it easy to use various rules to map from The OU data to the required qualification grouping. These rules included a close examination of the content of the qualification title in the XCRI-CAP data to make sure we mapped to the correct values.
  • For some elements, use of standardised boilerplate text in specific circumstances, again identified by business rules.
  • Good reporting back from the SDS Bulk Update system on the status of (and errors from) the import. This included an online status report, available within a few minutes of loading, showing how many records of each type had been successfully uploaded, with date and time.
  • The system permits us to download the whole data set (well, technically as much as could be mapped) in XCRI-CAP 1.1 format, so we were able to compare the whole new set of records with what we expected to have.
  • The ability to review the new data in the PROMT client interface within minutes of the Bulk Upload. This gives a great reassurance that nothing's gone wrong, and it permits rapid checking and small tweaks if necessary.

I see this combination of bulk upload with a client or web-based edit and review interface as an excellent solution to course marketing information collection. This push method of data synchronisation has the advantage of maintaining the provider's control of the supply, and it still permits fine-tuning, checking and manual editing if that is necessary. In contrast a fully automatic 'pull' version might leave the provider out of the loop - not knowing either whether the data has been updated, or whether any mistakes have been made. This is particularly important in cases where the collector is unfamiliar with the provider's data.


XCRI-CAP: turn 12 days of keying into 3 hours of checking.


Consuming XCRI-CAP II: XCRI eXchange Platform (XXP)

by alanepaull on Monday, 25 February 2013, in General

XXP experiences

Since I helped to specify the XCRI eXchange Platform, and I'm currently seeking more institutions to use it, I do have an interest. However, I don't do the very techie database development or systems development work on it, so I'm more a very experienced user and part-designer.

The purpose of XXP is to provide an XCRI-CAP service platform, so it has facilities for loading XCRI-CAP data, though not yet fully automatic ones. The platform has been designed specifically for XCRI-CAP, so its main functions are to provide input and output services that are likely to be relevant to the community. For example, it has CPD and part-time course data entry facilities, enabling providers to key and maintain these types of course very easily, with vocabularies optimised for the purpose. There is also a CSV loader for those who can output CSV but not XCRI-CAP - this effectively provides a conversion from CSV to XCRI-CAP 1.2 because, like all the XXP services, loading in the data enables the creation of output XCRI-CAP feeds (both SOAP and RESTful).

Importantly, XXP has a feed register (used by our Cottage Labs colleagues for their Course Data Programme demonstrator project), so that you can discover where a feed is, who's responsible for it, what it covers and so on.

XXP is defined by the input and output requirements that APS and Ingenius Solutions have currently provided in response to their perception of market demand. This necessarily changes as more institutions get their data sorted out. While the focus in XXP is on acting as an agent for a provider (a university or college), XXP is effectively an interface between the provider and other aggregating organisations. It enables the creation of 'value-added' feeds enhanced by extra data (such as addition of vocabularies, like those for course type, or subject) and by transformation of data (typically concatenating or splitting text fields, or mapping from one classification system or vocabulary to another).

Getting XCRI-CAP data into XXP is at the moment not completely automatic. The main routes are a manual load - which is fairly time-consuming - or an automatic CSV load (the data2XCRI service), which requires a CSV file. In fact (and somewhat bizarrely) it's not difficult to produce the CSV file from an existing XCRI-CAP file and then load it in. This is a stopgap measure until XXP has a fully functioning XCRI-CAP loader.

My use of XXP's consumption of XCRI-CAP has so far been via a push method - I stay in control of the operation and can make sure it all works as expected. XXP has a straightforward read-only View function, so you can see the data in the system once it is loaded. If changes need to be made, you make them at source (upstream); if there were an edit function for the XXP-loaded data, those edits would be wiped out when you next loaded the data in.

As the data content going into XXP is controlled directly by the provider, XXP imports whole data sets, not updates. This simplifies the process considerably, as both sides can focus entirely on live, complete data sets. Maybe this needs a bit more explanation. I figure that if the provider controls the data, then the current data in XXP won't have been 'enhanced' by manual edits or upgraded data; therefore it's safe to completely overwrite all the data for that provider - doing so won't wipe out anything useful that we're not going to add back in. This is in contrast to 'delta update' methods that compare old and new data sets and pump in just the changed material. It's much simpler, which has some merit.

Some of the difficulties that had to be overcome in the XXP aggregation:

  • Use of URLs as internal identifiers (ie inside XXP) for linking courses and presentations - this is overcome either by minting a new internal identifier or by re-constructing the URL (keeping the unique right-hand part).
  • On-the-fly refinements using xsi:type - this is a technical problem, as many tools don't like (read: tolerate) xsi:type constructions, or indeed any kind of redefinition, extension or restriction. This requires workarounds for, or at least careful handling of, extended types.
  • Non-normalised material in XCRI-CAP structures. For example, venue is nested in presentations, and therefore repeated. As the XCRI-CAP is parsed, you may find new venues or repeated venues that need to be processed. Ideally all venues should be processed prior to the course and presentation structures, so it may be best to make one pass through the file to discover all the venues, then a second to populate the rest (see the sketch after this list).
  • Incomplete bits. For example, the venues referred to in the previous bullet may simply have a title and postcode. XXP has a facility for adding missing data to venues, so that the output XCRI-CAP feed can be more complete.
  • Matching of vocabularies. Some feeds may use JACS, others LDCS, others simply keywords, yet all the data goes into a subject field - this requires a method of storing the name of the classification scheme and its version number (JACS 1.7, 2 and 3 are substantially different).
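
To make the venue point concrete, here is a hedged sketch of the nesting involved; the element detail is simplified from XCRI-CAP 1.2, so treat the address structure as indicative only:

    <!-- The same venue can recur under many presentations, which is why an
         aggregator would harvest venues in a first pass before loading courses -->
    <course>
        <presentation>
            <start dtf="2013-09-02">2 September 2013</start>
            <venue>
                <provider>
                    <dc:title>Example Campus</dc:title>
                    <mlo:location>
                        <postcode>AB1 2CD</postcode>
                    </mlo:location>
                </provider>
            </venue>
        </presentation>
    </course>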

A substantial advantage of XXP is that once you've put the data in (in whatever format), you can get it out very easily - currently as XCRI-CAP SOAP and RESTful getCourses, but there's no reason why other APIs couldn't be added for JSON, HTML, RDF and so on. This effectively means that XXP can have mapping and transformation services into and out of XCRI-CAP, adding value for particular 'flavours' or for new versions.


XCRI-CAP: turn 12 days of keying into 3 hours of checking.


Consuming XCRI-CAP I

by alanepaull on Thursday, 21 February 2013, in General

This post and a few later ones will be some musings on my experiences of how XCRI-CAP is, or might be, consumed by aggregating organisations and services. I'll not go into the theoretical models of how it could be done, but I'll touch on the practicalities from my perspective. Which, I admit, is not that of a 'proper' technical expert: I don't write programs other than the occasional simplistic Perl script, nor do I build or manage database systems, other than very simple demonstrators in MS Access, and I dabble in MySQL and SQL Server only through the simplest of front-end tools.

My main XCRI-CAP consuming efforts have been with four systems: XXP, Trainagain, Skills Development Scotland's Bulk Import Facility and K-Int's Course Data Programme XCRI-CAP Aggregator.

XXP characteristics

  • Collaborative working between APS (my company) and Ingenius Solutions in Bristol
  • Service platform for multiple extra services, including provider and feed register (for discovery of feeds), AX-S subject search facility, CSV to XCRI converter, web form data capture, getCourses feed outputs (SOAP and RESTful)
  • Doesn't yet have an auto-loader for XCRI-CAP. We can load manually or via our CSV to XCRI facility.

Trainagain characteristics

  • Existing system with its own established table structure, its own reference data and own courses data
  • SQL Server technology
  • I have an off-line 'sandbox' version for playing around with.

Skills Development Scotland Bulk Import Facility characteristics

  • XCRI-CAP 1.1 not 1.2
  • Existing live XCRI-CAP aggregation service (push architecture)
  • Works in conjunction with the PROMT data entry system

K-Int XCRI-CAP Aggregator characteristics

  • Built on the existing Open Data Aggregator, a generalised XML-consuming service.
  • Takes a 'relaxed' view of validation - data that is not well-formed can be imported.
  • Outputs JSON, XML and HTML, but not XCRI-CAP.

These are early days for data aggregation using XCRI-CAP. There's been a chicken-and-egg situation for a while: aggregating organisations won't readily invest in facilities to consume XCRI-CAP feeds until a large number of feeds exist, while HEIs don't see the need for a feed if no-one is ready to consume it. The Course Data Programme tackles the second of these problems (I guess that's the egg?): if we have 63 XCRI-CAP feeds, then we should have a critical mass to provoke aggregating organisations into consuming them.

Some of the questions around consumption of XCRI-CAP feeds centre on technical architecture (push or pull?), what type of feed to publish (SOAP, RESTful, or just a file?), how often the feed should be updated and/or consumed (in real time? weekly? quarterly? annually? whenever something changes?), and how feed owners know who's using their feed (open access versus improper usage, copyright and licensing). Some of these issues are inter-related, and there are other practical issues around consuming feeds for existing services - ensuring that reference data is taken into account, for example.

I'll try to tease out my impressions of the practicalities of consuming XCRI-CAP in various ways over the next few blog posts.


XCRI-CAP: turn 12 days of keying into 3 hours of checking.


What's the point of XCRI-CAP?

by alanepaull on Thursday, 14 February 2013, in General

What's the point of XCRI-CAP? This has been a cry for quite a while, even amongst some of the projects in the JISC-funded Course Data Programme. Well, this is a story about how I've found it useful.

Many years ago I left ECCTIS 2000, the UK's largest courses information aggregator and publisher, having been technical lead there for 8 years. Over those 8 years, during which we moved our major platform from CD-ROM (remember them?) to the web, we established a state-of-the-art course search system with integrated data picked up from:

  • course marketing information (keyed, classified and QAed by Hobsons Publishing),
  • text files from professional bodies (keyed by them, but marked up by us),
  • advertising copy and images (also keyed by the supplier and marked up by us),
  • subject-based statistics from HESA,
  • vacancy information (at appropriate times of the year) from UCAS,
  • and so on.

We used a new-fangled technology called Standard Generalised Markup Language (SGML) with our own bespoke markup.

The technology allowed us to produce separately versioned searchable products for three flavours of CD-ROM (Scotland, rest of UK, international), for the web and for printed publications, all from the same integrated data set. Our system enabled us to aggregate data received from multiple sources, including huge data sets of well-structured text (from Hobsons), quite large statistical sources (HESA), and smaller 'freestyle' text items from advertisers and other organisations that we marked up ourselves. Shades of the XCRI-CAP Demonstrator projects, but 20 years ago. ECCTIS 2000 was a major aggregator, and probably *the* major UK courses information aggregator of the time. Our development built on some highly innovative work carried out by The Open University in the 1980s, including seminal developments in CD-ROM technology, but that's another story.

Much of my career to date had been centred on the development of standard methods for managing course marketing information as an aggregator. Quite a bit of my freelance career was to be on the other side of the fence, helping HEIs to manage courses information as providers, though I've always had some involvement in the aggregating organisation field.

APS Ltd, my small company, was fortunate enough to gain a contract from The Open University to act as their agent for disseminating course marketing information to the wider world of the emerging course search sites on the web. The main ones from the OU's viewpoint at that time were the British Council, Graduate Prospects and the learndirect services in the countries of the UK. I also set up, for UCAS, its 'data collection system', through which UCAS obtained the courses data not used in its application system but supplied on to third parties (such as learndirect, newspapers, Hotcourses and others).

Most of these small acts of data collection and dissemination were carried out by what are now seen as 'traditional' methods: re-keying from prospectuses, or keying directly into a supplier's web form. However, in a few cases (not nearly enough, in my view) we were able to obtain electronic files from HEIs - for example, as I was managing both the OU dissemination and the UCAS data collection input, it seemed sensible to me to provide the data electronically and to import it automatically. No problem.

At that point, it occurred to me that if I could do this for the OU data, why not for many other HEIs? One reason was lack of standards, the other main one was the chaos in course marketing systems (where they existed) in HEIs - understandable as most were desperately trying to come to terms with new internet technologies, particularly websites, and how these related to their paper prospectuses.

My initial solution was to use SGML (XML being a twinkle in someone's eye at that time) to create a 'lowest common denominator' structure and format for courses information, convert data into that format, then write a suite of programmes to create bespoke outputs for course information aggregating organisations. There ensued a 'happy time' of 3 to 4 years during which we would acquire the OU data in a convenient database format, carry out a swathe of well-documented, software-driven and mainly automatic processes, produce a range of output files (Access databases, spreadsheets, CSV files) and fling them around the country for up to ten or so aggregating organisations to import. For learndirect Scotland, to take just one example, we would produce a series of CSV files and email them off, and they would load them into their database. Time taken: maybe 5 minutes for the automatic processing, 30 minutes for checking.

[Image: OU Course Data Converter Suite]

I stress here that our supply of OU data to learndirect Scotland before 2007 took us about 35 minutes, 90% of that simply checking the data. We would supply updates five times per year, so our total annual time specifically on the learndirect Scotland update would have been significantly less than half a day. However, in a re-organisation, learndirect Scotland disappeared, and in place of their system that imported the OU data, the replacement organisation implemented a new one called PROMT. Ironically, this new system was anything but prompt, from our perspective. With no import mechanism, we were required to key everything from scratch into their bespoke and somewhat eccentric client software - our task went from 35 minutes to 2 or 3 days (the OU had over 1,200 presentations), and the annual task leapt from less than half a day to about 12 days. A double irony: behind their clunky client software sat XML and various other interoperability techniques, completely unavailable to those supplying the data.

This was the situation in 2007, and our 'happy time' ended, as everyone rapidly stopped taking bulk updates and moved to the 'easier' method of forcing HEIs to re-key their data into bespoke web forms. Our time to update the OU data more than doubled - so much for new technology! There was much grinding of teeth (and not just from APS, but from colleagues across the sector).

By now, you should be able to see where I'm coming from in respect of XCRI-CAP.

So, what's the point of XCRI-CAP? My final illustration: Skills Development Scotland has now done us proud. In addition to their PROMT software (now improved over the years), they have set up an excellent bulk import facility for providers to use to supply XCRI-CAP 1.0 or 1.1 data (and I'm sure we can persuade them to move to 1.2 soon). APS is now using this facility, coupled with The Open University's XCRI-CAP 1.1 feed, to get back to our 'happy time' again - only better, because now everyone can have 'happy times' if more aggregators use XCRI-CAP.

XCRI-CAP: turn 12 days of keying into 3 hours of checking.

--------------------------------------------------------------

APS has also produced a 'value added' XCRI-CAP 1.2 feed for any aggregator to use: http://www.alanpaull.co.uk/OpenUniversityXCRI-CAP1-2.xml. As we are able to tweak this feed in response to specific aggregator requirements, please get in contact if you would like to use this feed, or to discuss how APS might help you with your courses information dissemination. We also have a range of services through the XXP Platform.


Typing woes

by alanepaull on Monday, 04 February 2013, in data specification

How do we use XCRI-CAP to enable feed consumers to filter out the course records they want from those they don't want? A fundamental question, and one that was asked when we first started to design XCRI back in the day. This post, with a reiteration of that question as its starting point, was stimulated by Qamar Zaman's blog post "XCRI and Qualification" (https://atiqam.wordpress.com/xcri-and-qualification/).

XCRI-CAP 1.2 has many features that permit filtering, in theory. These include, at course level: subject, type and the level within credit; at presentation level: age, duration, studyMode, attendanceMode and attendancePattern; at qualification level: educationLevel and type. These elements were deliberately included to help with filtering, both for consumption of feeds and for search.
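
For instance, a presentation populated with the suggested vocabulary values might look like the sketch below (the element names are from XCRI-CAP 1.2; the text values follow the suggested vocabularies, and I've left out any attributes):

    <presentation>
        <studyMode>Part time</studyMode>
        <attendanceMode>Distance learning</attendanceMode>
        <attendancePattern>Evening</attendancePattern>
    </presentation>

Of course, a consumer can only filter on these values if producers populate them consistently - which is the nub of this post.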

However, XCRI-CAP is primarily a structural specification - it specifies, for example, that if you have a course title, this is where you should put it. The spec itself doesn't prescribe the content of elements, except for some (non-binding) suggestions for studyMode, attendanceMode and attendancePattern. That's why we have a Data Definitions document for the Course Data Programme, and even that is to an extent loosely specified - and could do with tightening up, once we have agreement on the content. For machine-readability this is not ideal, but it has helped to enable many organisations to produce an XCRI-CAP feed, and we already have some aggregation taking place, and some services.

When XCRI-CAP was designed, there were very few generally accepted vocabularies for the key information items that enable divvying up of the data. The designers were therefore loath to include vocabularies in the spec, as that could easily have restricted its take-up by negating potential use cases. On the other hand, producers of XCRI-CAP feeds need to know what feed consumers want inside many of these elements, so that the data can be filtered and consumed with a minimum of unnecessary intervention. This is why the emergence of communities of practice within the Course Data Programme (for example, around Graduate Prospects and the Creative Assembly, and later UCAS) has been so encouraging and so important.

As I've mentioned, XCRI-CAP 1.2 does include several data elements that can help, if populated with agreed vocabularies. Some are fairly well specified, such as studyMode and attendanceMode, while others (I'm looking at type in all its forms as an example) are less so. We can enumerate qualification type through various well-established frameworks (for example NQF or QCF). We have JACS and other vocabularies for subjects, and we have suggested vocabularies for studyMode, attendanceMode and attendancePattern.

The purpose of the type element in course is to provide a filtering mechanism not already covered by elements such as studyMode, subject, qualification or educationLevel. An archetypal "type" is 'continuing professional education' courses: these cannot be readily extracted using existing elements, because they typically carry no credit or level, and you cannot pick them out with just a subject vocabulary, duration or other easy descriptor without analysing free-text descriptions. It also seems to me not unreasonable that an aggregator might want to pull out CPD courses (in fact we already have two specific cases of this). Nor is this an isolated use case: consider 'Open Learning' courses, or 'Continuing Education', or even 'Undergraduate' (under-specified for level in most frameworks), or 'Postgraduate Taught' and 'Postgraduate Research'. The current state of XCRI-CAP design does not permit these groups of courses to be filtered easily without more vocabularies.

I think we legitimately have several axes (pl. of axis and pl. of axe!) here with which to slice up course provision, independent of educationLevel, qualification abbreviation, studyMode, subject and others explicitly defined:

  • qualification type: For example - 'GCSE or Equivalent', 'Foundation Degree', 'Postgraduate Qualification' [As an aside, importantly, I note that there is an error in the Data Definitions: there *should* be an element for qualification type; it's in the schemas but not the data definitions. This may help, as there are some useful qualtype vocabularies around that don't necessarily equate simply to 'level'.]
  • course type (inter-institution context): For example 'Continuing Professional Development', 'Open Learning', 'Continuing Education', 'Undergraduate', 'Postgraduate Taught', 'Postgraduate Research', 'Summer School', 'Researcher Training'.
  • course type (structural component type within an HEI's offerings): For example - 'module', 'programme', 'stage'; as used in the HEAR XML specification
  • course type (community practice): see CPD community practice below.
  • module / programme relations: could use hasPart / isPartOf, coupled with the structural component type (again as in the HEAR XML specification)

A typical usage in the CPD community might be:
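
Something like the following - the cdp prefix and the exact xsi:type name are illustrative rather than quoted from the schema, so check them against the Course Data Programme schema before copying:

    <!-- Illustrative: prefix and type name are assumptions -->
    <dc:type xsi:type="cdp:courseTypeCPD">Continuing Professional Development</dc:type>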

This example is already implemented in the Course Data Programme schema, and the vocabulary is published as a VDEX file here: https://xcri.co.uk/vocabularies/courseTypeCPD1_0.xml.

My view is that communities of practice will have specific requirements for these vocabularies, which will differ between communities. For example, Graduate Prospects may well wish to split up PG courses into different types, not necessarily linked directly to educationLevel; some "course type" terms might look like educationLevel terms, but they are being used in a different, course-based context. For PG courses, you might want to identify CPD, Taught and Research courses, and perhaps researcher training, and these terms might be sufficient in the type element. [Bear in mind that multiple type elements are permitted, and that we can use xsi:type to prescribe and validate vocabularies.]

So there are many ways to slice and dice chunks of course provision, and XCRI-CAP 1.2 has elements that can enable this. We collectively need to determine what chunks need identification over and above the reasonably well specified stuff like subject, educationLevel and study mode. We can implement multiple vocabulary elements if required - a type vocab doesn't have to have mutually exclusive terms. And in my experience starting with agreement on a small number of terms is better than trying to get to an all-encompassing vocabulary before using it.

Here are three, for a small start.

  • 'Continuing Professional Development'
  • 'Undergraduate'
  • 'Postgraduate'

Implement with the following XML:
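
Again, the cdp prefix and the xsi:type name below are illustrative rather than quoted from the schema; a given course would carry whichever term applies, and multiple type elements are permitted:

    <!-- Illustrative: prefix and type name are assumptions -->
    <dc:type xsi:type="cdp:courseTypeGeneral">Continuing Professional Development</dc:type>
    <dc:type xsi:type="cdp:courseTypeGeneral">Undergraduate</dc:type>
    <dc:type xsi:type="cdp:courseTypeGeneral">Postgraduate</dc:type>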


And also some components:

  • 'Programme'
  • 'Pathway'
  • 'Stage'
  • 'Year'
  • 'Module'

Implement with XML along the following lines:
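
As before, the prefix and type name are assumptions rather than quotations from the schema:

    <dc:type xsi:type="cdp:courseTypeComponent">Module</dc:type>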


These will validate against the Course Data Programme schema, but should be considered a pilot implementation.

 


Posters and Presentations

by alanepaull on Thursday, 31 January 2013, in General

No, this isn't a weird Dungeons and Dragons clone, it's about the Jisc Course Data Programme 'Show and Tell' on 29 Jan at Aston University. This day-long conference was for projects to share what they'd done before the funding for the Course Data Programme runs out (March 2013). And there was multitudinous sharing! We had a keynote from Professor Mark Stubbs (the grandaddy of XCRI-CAP), excellent synthesizing from Gill Ferrell, sizzling lightning talks from projects and demonstration services, discussions galore across the themes of institutional course management, getting ready for better data integration, techies' corner, and XCRI-CAP enabled services, as well as over fifty beautiful project posters. The day was rounded off with a Q&A panel of experts (and me!), during which both Graduate Prospects and UCAS were able to re-iterate their support for XCRI-CAP aggregation - always a good sign to get national approvals.

My own involvement was primarily as a member of the XCRI Support Team, together with my colleagues Kirstie Coolin, Geoff Ramshaw, Roger Clark and Craig Hawker. I gave a lightning talk - less than 5 minutes, but rather longer in prep time - on the demonstrator that APS has produced alongside Ingenius Solutions: the Advanced XCRI-CAP Search Widget. This little piece of code for websites gives 'best of breed' subject searching using synchronised XCRI-CAP data, a specially designed thesaurus and a cunning algorithm. We're now hoping that many others will want to re-use our method - and we have interest from the Creative Assembly already, so let the collaborations continue... they've already begun.

Each of the demonstrator projects gave succinct and stimulating lightning talks, topped off at the end by George from Middlesex University in pirate's hat and pistol to demo the MUSKET tools - you certainly couldn't miss his team. MUSKET and its sister project MUSAPI are providing interoperable data services for sophisticated course content comparison, and for linking up academic subjects with job profiles and job opportunities. Fortunately for me, Rob Englebright is looking at the demonstrators in some detail on the JISC eLearning Blog, so I don't need to go through them here.

The Creative Assembly - Arts UC Bournemouth, Courtauld Institute, Falmouth Uni and Plymouth College of Art - was probably the highlight of the show for me, epitomizing so much of what we're trying to achieve: they've not only improved their own processes for producing course marketing information, but also collaborated on a range of common solutions to common problems (Drupal modules, for example); they aggregate their marketing information and are building a brand new web portal for learners in their niche market. Elaine Garcia and the team did an excellent job, and Falmouth placed first in the poster competition too.

I also chaired the discussions in the afternoon session for Theme 1 (institutional course information management), for which we had an excellent turnout. After 45 minutes or so of lightning talks, the floor was open for questions and issues. Topics of particular interest included:

  • how granular is the information?
  • can we write Plain English or must we use Academese?
  • there's a problem with versioning that's pretty hard.
  • how can we identify CPD courses?
  • managing this stuff is difficult, and some of our problems are the same across institutions.
  • cultural change is also hard.

However, the situation is not impossible. As Gill Ferrell said in her synthesis comments: "Opening Pandora's Box also released Hope", and part of that hope is XCRI.

Though we've still got a long way to go to embed XCRI-CAP into the HE landscape, the Show and Tell generated a huge amount of enthusiasm, and it's obvious that many more people now 'get it'.


Are you SITSing comfortably?

by alanepaull on Monday, 07 January 2013, in General

I've been musing for some while now on the SITS Module and Course Collaboration meeting in November, arranged by colleagues at Cranfield University and the University of Wolverhampton. The latter has implemented a Module Approval system using SITS Process Manager, and their approach had several particularly interesting characteristics:

  • An insistence that academics must deliver what's been validated and what students have been told about, rather than permitting on-the-fly variations.
  • Academics are asked to write information for the student audience (not for validation processes) - this required some training.
  • A primary purpose of writing information was to enable it to be re-used.
  • Everyone has access to everything; nothing is filtered out so it can't be seen.
  • It isn't a 'fits all needs' solution, but it 'does most'.

I think this highlights some particular issues for different circumstances in different institutional cultures.

'Deliver what's validated and what the students have been told about' might seem like a no-brainer. However, practice varies across institutions and even within institutions, and the process of course design (rather than delivery) can be seen as a continuous one with no particular end point. As a board game designer and board game player, I see a parallel here. Game design is also an ongoing process that never finishes, as improvements to the game can always be made. But when playing an instance of the game, it's essential that the players know the rules are fixed, or the game loses its credibility and the players' experience is undermined. Similarly, even if you *want* to improve the instance of a course, changing aspects of the advertised and expected course arrangements or curriculum can undermine the student experience. Sitting on your hands and waiting till the next iteration might be a better approach, but does the academic culture or common practice support this approach?

'Writing for the student audience' and re-use of information are key aspects of maximising the advantage of process improvement and standardisation using XCRI-CAP, I feel. Implementation of this type of change may be difficult, especially in a heavily decentralised institution, because it entails engagement of the whole academic community and perhaps a change in the culture not only of how to write courses information, but also in the freedom that individuals perceive they ought to have in creating the materials. This is a good example of how an information management process can have a potentially far-reaching impact on culture.

'Everyone has access to everything'. Everyone knows that access to information is a power-based concept. This may be a particularly high hurdle for some institutions, but if visibility is poor, then process inefficiencies, and potentially quality-destroying workarounds or breaches of regulations and guidelines, can be concealed. In many revisions of validation and approval processes, there is a tension between the perceived flexibility of 'free form' manual processes (even though they may take a long time) and the perceived inflexibility of digital ones (even though they may be quicker). However, these perceptions often hide the complexity of existing manual methods and cloud the 'business rules' that are supposed to be applied. Cultural change may be necessary, so that staff actually adhere to methods, time scales and detailed procedures that have been formally promulgated in the past, but not necessarily fully adhered to in the present. Processes supported by digital technologies should model the agreed business rules, such that flexibility and inflexibility are reflections of the agreed processes. I suspect that this is the core technical challenge of process improvement here.

The final bullet is also important. It's unlikely that the nirvana of a perfect solution will be reached by process improvement and associated cultural change. Expectations have to be managed. Change must be an improvement on existing methods, but each person has to be sufficiently involved in and engaged with the proposed changes that their understanding of the change process itself enables that individual to realise the limitations of the changes. And oft-times the new processes must be able to cope with, or support, valid exceptions and complexity.


Perils of typing

by alanepaull on Friday, 30 November 2012, in data specification
"With some trepidation" is how I started my recent email to the CourseDataStage1 mailing list, as I asked for comments on a suggestion about a vocabulary for course 'type'. We have an ongoing robust discussion.

The ordering of XCRI-CAP data items

by alanepaull on Wednesday, 18 July 2012, in data specification

A question just arose from a project about the correct ordering of XCRI-CAP data items; ordering is required by the validator, but is not actually part of the spec. So I had a few thoughts about that.

Although ordering of data items is not part of the XCRI-CAP 1.2 specification, it *is* part of the XCRI-CAP 1.2 XML schema. I can therefore confirm that the order of the data elements in XCRI-CAP feeds is important!
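
To see why, consider how a W3C schema is typically written: child elements are declared inside an xs:sequence, and a sequence requires the instance document to present them in the declared order. The fragment below is a hand-drawn illustration, not copied from the published schema:

    <!-- Sketch only; see https://xcri.co.uk/bindings/xcri_cap_1_2.xsd for the real declarations -->
    <xs:complexType name="presentationType">
        <xs:sequence>
            <xs:element ref="start" minOccurs="0"/>
            <xs:element ref="end" minOccurs="0"/>
            <xs:element ref="venue" minOccurs="0" maxOccurs="unbounded"/>
        </xs:sequence>
    </xs:complexType>

An instance that put end before start would fail validation against this binding, even though the specification prose mandates no particular ordering.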

The specification, as published on the wiki at xcri.org/wiki, is a formal description of XCRI-CAP data structures and data items. It can be implemented and coded in many different ways, for example through a W3C schema (as we have used in Craig's validator and published at https://xcri.co.uk/bindings/xcri_cap_1_2.xsd), through an RDF schema, through Schematron or through JSON. Each of these implementation formats is generally referred to as a 'binding', and each can meet the requirements of the specification. However, as they use different implementation technologies, they are only directly interchangeable if a developer produces something to convert one implementation format into another - in some cases there may be off-the-shelf ways of doing this.

The XCRI-CAP community has mostly used a W3C XML schema approach, and we've re-used some linked schemas (Dublin Core for example) to save re-designing common data elements, like 'title' or 'identifier'. We have also implemented the XCRI-CAP schema, so that it is compliant with the European Norm, Metadata for Learning Opportunities (MLO).

It's interesting to note that The Standard is the spec, not the schema - the schema is an implementation of The Standard. And some standards don't have bindings yet - for example, there isn't an 'official' MLO binding, although Scott and I have implemented one in order to use it with XCRI-CAP. This gives us namespace problems, because without a binding we can't use a 'real' namespace. So, for example, re-using the JACS coding system in a transparent way is problematic - it has no official interoperable binding, so no namespace. Also, some standards don't have 'official' bindings even if they have namespaces - Dublin Core, for example, though I admit that we've tended to use the published DC schemas as if they were standards.

There are other ways of doing this - in fact, I have a different XCRI-CAP 1.2 implementation that just puts all the XCRI-CAP items in one schema, thereby avoiding any namespace difficulties. However, this is not the 'official' schema. If anyone wants that, just let me know.

Alan


Give 'em good data (or they'll just use the bad stuff)

by alanepaull on Wednesday, 04 July 2012, in data specification
I've been looking at the Course Data Programme interim reports recently, as part of my role in the XCRI Support Project. One question that has arisen ...

LRMI, schema.org and XCRI-CAP

by alanepaull on Thursday, 14 June 2012, in data specification
It's looking like there might be possibilities for linking the Learning Resources Metadata Initiative (LRMI) and XCRI-CAP, in some way. LRMI is part o...

Data visualisation and advanced MUSKETry

by alanepaull on Wednesday, 28 March 2012, in data specification
One of Jamie Mahoney's ON Course blog posts suggested this train of thought. It was started by a quote from Edward Tufte’s 'Visual Display of Quantita...

Part time pre-Ellumination

by alanepaull on Wednesday, 14 March 2012, in data specification
We have a second Elluminate session on XCRI-CAP data definitions and vocabularies this afternoon, so naturally I've been ruminating on the subject. So...

Testing, testing and more testing!

by Craig Hawker on Wednesday, 07 March 2012, in validation
This is the sixth in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by visi...

Round 2 of data definitions continues

by alanepaull on Monday, 05 March 2012, in data specification
I've been amending v1.6 of the Data Definitions document and v2.0 of the Vocabulary Framework for a few days now. Rather more has come up than expecte...

A retrospective on feedback so far

by Craig Hawker on Tuesday, 28 February 2012, in validation
This is the fifth in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by visi...

BS 8581 and all that jazz

by alanepaull on Tuesday, 21 February 2012, in data specification
BS 8581-1 is the catchy name for the emerging British standard that is / was XCRI-CAP. Members of BSI committee IST/43 (trips off the tongue, no?) have...

Vocabularies and validation

by Craig Hawker on Monday, 20 February 2012, in validation
This is the fourth in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by vis...

Vocabulary Framework

by alanepaull on Wednesday, 15 February 2012, in data specification
I've now at last published the draft Vocabulary Framework Document for the Course Data Programme: you can get it at https://xcri.co.uk/KbLibrary/Co...

Validation: a first public look

by Craig Hawker on Monday, 13 February 2012, in validation
This is the fourth in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator.  The entire series can be found by vis...

State of the Nation

by Craig Hawker on Monday, 06 February 2012, in validation
This is the third in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator.  The entire series can be found by visi...

Validation Library structure

by Craig Hawker on Monday, 30 January 2012, in validation
This is the second in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator.  The entire series can be found by vis...

Straw man

by alanepaull on Wednesday, 25 January 2012, in data specification
Well, the first public draft of the straw man data definitions document has been published (Version 1.6 of Data Definitions). It's rather longer than ...

XCRI-CAP 1.2 validation - the first steps

by Craig Hawker on Monday, 23 January 2012, in validation
This is the first in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator.  My aim is to post a new blog post ever...

Working group working

by alanepaull on Wednesday, 18 January 2012, in data specification
The working group on data specification and vocabularies has started. Many thanks to the early birds who've already begun to comment on the initial 's...

Welcome to the data specification and vocabularies blog!

by alanepaull on Wednesday, 18 January 2012 (data specification)
This blog section is to focus on the work we're doing on the data definitions (or specifications, I'm not sure quite which one I prefer) and vocabular...

Welcome to the XCRI-CAP 1.2 validation blog section

by admin on Wednesday, 18 January 2012 (validation)
Welcome to the XCRI-CAP 1.2 validation blog! The validator will be created by Craig Hawker of the Course Data Consortium. It will be based on the X...

XCRI-CAP and KIS

by alanepaull on Tuesday, 25 October 2011 (General)
For anyone who's missed it.... Bonnie Ferguson at the University of Kent has written a very useful blog entry about XCRI-CAP and KIS. Worth a read and...

XCRI Knowledge Base - a face lift

by alanepaull on Monday, 12 September 2011 (General)
We've nearly finished giving the XCRI Knowledge Base a face lift. Earlier this year during the months t...

From Tony Hirst: Several Million Up for Grabs in JISC ‘Course Data’ Call. On the Other Hand…

by alanepaull on Monday, 05 September 2011 (General)
For another perspective on the Course Data programme, see Tony Hirst's blog post http://blog.ouseful.info/2011/09/05/several-million-up-for-grabs-in...

XCRI Self-assessment Framework updating

by alanepaull on Friday, 26 August 2011 (General)
JISC has approved some work on upgrading the XCRI Self-assessment Framework and the XCRI Knowledge Base prior to the start of Stage 1 of the Course Da...

Consolidated Recommendations Inc.

by alanepaull on Thursday, 11 August 2011 (General)
Over the last few days Kirstie and I have been synthesizing the recommendations from the XCRI-CAP Self-assessment Framework Field Testing projects. We...

"CURRICULUM DESIGN: X MARKS THE SPOT?" a CETIS blog post about XCRI by Lou McGill

by alanepaull on Friday, 05 August 2011 (General)
From the CETIS blog 'Other Voices', Lou McGill considers how institutions connect and manage course information, and the role that XCRI can play.

Data specifications - help please!

by alanepaull on Friday, 29 July 2011 (General)
Do you have any data specifications or other information about the data content that is included in your XCRI-CAP feeds or files? I’m trying to collect as many data specs as possible for the many organisations that are using XCRI-CAP, so that we can start to construct draft data specifications for 'communities of practice'. This will help to avoid inconsistencies between XCRI-CAP feeds and will make it much easier for aggregators to consume the data efficiently.
If you are able to help, please contact me by e-mail.
Alan Paull

Return of the original XCRI

by alanepaull on Tuesday, 26 July 2011 (General)
The original XCRI schema, release 1.0, which covered all types of courses information, had been hosted on the eFramework website for many years. That website is no longer available, so the schema has been moved to the XCRI Knowledge Base. No background information about the schema is available yet. See https://xcri.co.uk/schemas/xcri_r1.0.xsd.

It's also worth noting here that it's being used successfully in the CUMULUS project.
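
As a footnote for anyone who wants to try the schema out: below is a minimal sketch (mine, not part of the original XCRI tooling) of validating a feed against it with Python's lxml library. The schema URL is the one given above; the feed filename is a hypothetical placeholder, and the r1.0 schema may import further schemas, so treat this as illustrative only.

    # Minimal sketch: validate a courses feed against the XCRI r1.0 schema.
    # Assumptions: lxml is installed; "my_courses_feed.xml" is a hypothetical
    # local file; the schema URL is taken from the post above.
    from lxml import etree
    import urllib.request

    SCHEMA_URL = "https://xcri.co.uk/schemas/xcri_r1.0.xsd"

    # Fetch and compile the schema.
    with urllib.request.urlopen(SCHEMA_URL) as resp:
        schema = etree.XMLSchema(etree.parse(resp))

    # Parse the feed and report whether it validates, with line-level errors.
    doc = etree.parse("my_courses_feed.xml")
    if schema.validate(doc):
        print("Feed is valid against XCRI r1.0")
    else:
        for error in schema.error_log:
            print(f"line {error.line}: {error.message}")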

XCRI eXchange - making progress on the videos

by alanepaull on Wednesday, 13 July 2011 (General)
Having received the live captured Flash files from colleagues at the Video Production Group in the University of Nottingham, I'm now getting the mater...

It's arrived!

by alanepaull on Monday, 11 July 2011 (General)
JISC Grant Funding 8/11: JISC ‘Course Data: Making the most of Course Information’ Capital Programme - Call for Letters of Commitment. It's now arriv...

XCRI eXchange National Showcase - an initial verdict from Paul Bailey

by alanepaull on Wednesday, 29 June 2011 (General)
See Paul Bailey's blog entry about Monday's event at Nottingham....
Tags: XCRI-CAP, JISC

XCRI: End of the beginning

by Scott Wilson on Wednesday, 29 June 2011 (General)
There was a theme developing at the XCRI Assembly in June. An extended period of beta testing and specification development is now drawing to an end: ...
Tags: future, XCRI-CAP, xcri

#coursedata: preparing for increased demand

by rob-work on Friday, 24 June 2011 (General)
We have grown used to the instant availability of information, and when a swift web search doesn't return the results we need or expect, the assumpti...

GetTheData: Data / API FAQs for XCRI-CAP?

by alanepaull on Tuesday, 21 June 2011 (General)
I always keep an eye on Tony Hirst's OUseful.Info blog, because, well, it's very useful. If you don't know it, try it out. Tony's latest post was cal...
Tags: support

MUSKET Benefits Realisation Workshop - 13 June 2011

by alanepaull on Wednesday, 15 June 2011 (General)
On Monday last we had the final MUSKET Benefits Realisation Workshop at Middlesex University's Hendon Campus. We had presentations from colleagues fro...
Tags: JISC project

Field testing the XCRI Self-assessment Framework and other resources: Start up meeting

by alanepaull on Thursday, 09 June 2011 (General)
On Tuesday 7 June we had a virtual meeting to start the 6 projects that are field testing the XCRI Self-assessment Framework and other resources that ...

Welcome to All About XCRI!

by alanepaull on Wednesday, 08 June 2011 (General)
Welcome to our new eXchanging Course Related Information blog - All About XCRI. eXchanging Course Related Information has been happening for deca...

News

The sixteenth issue of the Course Data Programme Stage 2...

The fifteenth issue of the Course Data Programme Stage 2...

The fourteenth issue of the Course Data Programme Stage 2...

The thirteenth issue of the Course Data Programme Stage 2...

The twelfth issue of the Course Data Programme Stage 2...

The eleventh issue of the Course Data Programme Stage 2...

The KIS data has now been launched. Data from all...

The tenth issue of the Course Data Programme Stage 2...

The ninth issue of the Course Data Programme Stage 2...

Congratulations to all those who have been successful in their...

The eighth issue of the Course Data Programme Stage 2...
