This post and a few later ones will be some musings on my experiences of how XCRI-CAP is, or might be, consumed by aggregating organisations and services. I won't go into theoretical models of how it could be done, but I will touch on the practicalities from my perspective. Which, I admit, is not that of a 'proper' technical expert: I don't write programs other than the occasional simplistic Perl script, I don't build or manage database systems beyond very simple demonstrators in MS Access, and I dabble in MySQL and SQL Server only through the simplest of front-end tools.
My main XCRI-CAP consuming efforts have been with four systems: XXP, Trainagain, Skills Development Scotland's Bulk Import Facility and K-Int's Course Data Programme XCRI-CAP Aggregator.
XXP characteristics
- Collaborative working between APS (my company) and Ingenius Solutions in Bristol
- Service platform for multiple extra services, including provider and feed register (for discovery of feeds), AX-S subject search facility, CSV to XCRI converter, web form data capture, getCourses feed outputs (SOAP and RESTful)
- Doesn't yet have an auto-loader for XCRI-CAP; we can load manually or via our CSV-to-XCRI facility.
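To give a flavour of the kind of transformation a CSV-to-XCRI facility performs, here's a minimal sketch in Python. The column names and the field mapping are my invention for illustration only, not the real template; the namespace URIs are those used by XCRI-CAP 1.2.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical spreadsheet with one course per row; the column names
# ("title", "summary") are invented for this example.
csv_text = """title,summary
Bricklaying NVQ,Practical bricklaying to NVQ level 2
Plastering NVQ,Practical plastering to NVQ level 2
"""

XCRI = "http://xcri.org/profiles/1.2/catalog"
DC = "http://purl.org/dc/elements/1.1/"

def csv_to_catalog(text):
    """Wrap each CSV row in a minimal XCRI-CAP course element."""
    catalog = ET.Element(f"{{{XCRI}}}catalog")
    provider = ET.SubElement(catalog, f"{{{XCRI}}}provider")
    for row in csv.DictReader(io.StringIO(text)):
        course = ET.SubElement(provider, f"{{{XCRI}}}course")
        ET.SubElement(course, f"{{{DC}}}title").text = row["title"]
        ET.SubElement(course, f"{{{DC}}}description").text = row["summary"]
    return catalog

catalog = csv_to_catalog(csv_text)
provider = catalog.find(f"{{{XCRI}}}provider")
print(len(provider))  # number of course elements: 2
```

A real converter would of course map many more columns (venues, dates, qualifications) and validate the result, but the shape of the job is the same: one row in, one course element out.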
Trainagain characteristics
- Existing system with its own established table structure, its own reference data and own courses data
- SQL Server technology
- I have an off-line 'sandbox' version for experimenting with.
Skills Development Scotland Bulk Import Facility characteristics
- XCRI-CAP 1.1 not 1.2
- Existing live XCRI-CAP aggregation service (push architecture)
- Works in conjunction with the PROMT data entry system
K-Int XCRI-CAP Aggregator characteristics
- Built on existing Open Data Aggregator, a generalised XML consuming service.
- Takes a 'relaxed' view of validation: data that is not well-formed can still be imported.
- Outputs JSON, XML and HTML, but not XCRI-CAP.
These are early days for data aggregation using XCRI-CAP, and there has been a chicken-and-egg situation for a while: aggregating organisations won't readily invest in facilities to consume XCRI-CAP feeds until a large number of feeds exist, while HEIs don't see the need for a feed if no one is ready to consume it. The Course Data Programme tackles the second of these problems (I guess that's the egg?): if we have 63 XCRI-CAP feeds, then we should have a critical mass to provoke aggregating organisations into consuming them.
Some of the questions around consumption of XCRI-CAP feeds centre on technical architecture (push or pull?), what type of feed to publish (SOAP, RESTful, or just a file?), how often the feed should be updated and/or consumed (in real time? weekly? quarterly? annually? whenever something changes?), and how feed owners know who is using their data (open access versus improper usage, copyright and licensing). Some of these issues are inter-related, and there are other practical issues around consuming feeds for existing services - ensuring that reference data is taken into account, for example.
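To make the simplest scenario - pulling 'just a file' - concrete, here's a minimal sketch of what an aggregator's first step might look like: parsing a fetched XCRI-CAP catalog document and pulling out the course titles. The two-course sample is made up; the namespace URIs are those used by XCRI-CAP 1.2.

```python
import xml.etree.ElementTree as ET

# Namespace prefixes for XCRI-CAP 1.2 and Dublin Core.
NS = {
    "xcri": "http://xcri.org/profiles/1.2/catalog",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# A made-up catalog document standing in for a fetched feed.
sample_feed = """<?xml version="1.0" encoding="UTF-8"?>
<catalog xmlns="http://xcri.org/profiles/1.2/catalog"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <provider>
    <dc:title>Example College</dc:title>
    <course>
      <dc:title>Introduction to Welding</dc:title>
    </course>
    <course>
      <dc:title>Advanced Welding</dc:title>
    </course>
  </provider>
</catalog>"""

def course_titles(xml_text):
    """Return the dc:title of every course in an XCRI-CAP catalog."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.findall(".//xcri:course/dc:title", NS)]

print(course_titles(sample_feed))
# ['Introduction to Welding', 'Advanced Welding']
```

Note that this relies on the document being well-formed XML in the first place - which is exactly why a 'relaxed' aggregator like K-Int's, which tolerates imperfect input, has to do rather more work than this.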
I'll try to tease out my impressions of the practicalities of consuming XCRI-CAP in various ways over the next few blog posts.
XCRI-CAP: turn 12 days of keying into 3 hours of checking.