@synanetics/fhir-transform
v0.10.6
FHIR Transform Package
Performs FHIR $transform operations
Features
Interprets FHIR StructureMaps and applies their transformations to the input parameter(s), performing cardinality corrections against a supplied StructureDefinition. In line with the FHIR specification, the output resource(s) are explicitly not validated against the supplied StructureDefinition(s).
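To illustrate what cardinality correction means, here is a minimal sketch (not the package's actual implementation): a mapped value is wrapped in an array when the target element's `max` cardinality allows more than one, and unwrapped when `max` is "1". The `correctCardinality` helper and its types are illustrative only.

```typescript
// Hypothetical sketch of cardinality correction, assuming a simplified
// ElementDefinition shape with only the `max` cardinality field.
type ElementCardinality = { max: string }; // "1" or "*", as in FHIR ElementDefinition.max

const correctCardinality = (value: unknown, element: ElementCardinality): unknown => {
  const isArray = Array.isArray(value);
  if (element.max === '1') {
    // Target allows at most one value: unwrap a one-element array.
    return isArray ? (value as unknown[])[0] : value;
  }
  // Target allows many values: wrap a single value in an array.
  return isArray ? value : [value];
};

// e.g. a mapped `name` produced as a single object, but the target allows 0..*
correctCardinality({ family: 'Smith' }, { max: '*' }); // → [{ family: 'Smith' }]
correctCardinality(['Smith'], { max: '1' }); // → 'Smith'
```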
Example usage
import transform from '@synanetics/fhir-transform';
const structureMap = {
  // structureMap FHIR resource
};
const content = {
  // source resource
};
// Your structureDefinitionResolver can be as elaborate as you like, perhaps returning from a URL or a cache
const structureDefinitionResolver = async (url: string): Promise<any> => {
  switch (url) {
    case 'http://example.com/StructureDefinition/CustomResource':
      return {
        // profile definition
      };
    // whatever else
    default:
      throw new Error(`No StructureDefinition found for ${url}`);
  }
};
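Since the comment above suggests resolving from a URL or a cache, here is one possible sketch of a caching resolver. The in-memory `Map` cache and the fetch-based lookup are assumptions, not part of the package:

```typescript
// Hypothetical caching resolver: fetched StructureDefinitions are kept in
// memory so repeated $transform calls do not re-fetch the same profile.
const structureDefinitionCache = new Map<string, any>();

const cachingStructureDefinitionResolver = async (url: string): Promise<any> => {
  const cached = structureDefinitionCache.get(url);
  if (cached) return cached;
  const response = await fetch(url);
  const definition = await response.json();
  structureDefinitionCache.set(url, definition);
  return definition;
};
```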
// Your conceptMapResolver can be elaborate too, but needs to return an object keyed by the ConceptMap URL.
// This is done so that you can choose to "follow" links to other ConceptMaps that may exist inside the
// first ConceptMap returned by the resolver.
const conceptMapResolver = async (url: string): Promise<Record<string, any>> => {
  const record: Record<string, any> = {};
  const response = await fetch(url);
  const conceptMap = await response.json();
  record[url] = conceptMap;
  if (conceptMap.group[0].unmapped?.url) {
    const otherResponse = await fetch(conceptMap.group[0].unmapped.url);
    record[conceptMap.group[0].unmapped.url] = await otherResponse.json();
  }
  return record;
};
// the structureMapResolver works exactly the same as the conceptMapResolver, so is not shown here.
const result = await transform({
  content,
  structureMap,
  structureDefinitionResolver,
  conceptMapResolver,
});
Performance Benchmarking
Performing the $transform operation is both memory- and CPU-intensive: large volumes of reference data must be accessible in order to assert that data items are of the correct type and to perform identity transforms according to their default mapping groups.
Introducing changes that degrade performance is unfortunately too easy. In order to offer some protection against that, there is a set of tests that will fail if memory consumption grows too much, or if total processing time increases beyond a set threshold. These tests should be taken seriously.
They will run as part of the usual suite and can be run separately using:
pnpm test:performance
Additionally, there is a script that performs a similar suite of tests but produces nicer-formatted output with some additional metrics. It can be run using:
pnpm benchmark