# @h5web/app (v17.0.0)

H5Web App & Providers
H5Web is a collection of React components to visualize and explore data. It consists of two main packages:

- `@h5web/lib`: visualization components built with react-three-fiber;
- `@h5web/app`: a standalone, web-based viewer to explore HDF5 files (this library).
`@h5web/app` exposes the HDF5 viewer component `App`, as well as the following built-in data providers:

- `H5GroveProvider` for use with server implementations based on H5Grove, like jupyterlab-h5web;
- `HsdsProvider` for use with HSDS;
- `MockProvider` for testing purposes.
## Prerequisites
The `react` and `react-dom` dependencies must be installed in your project. Note that, as of version 10, `@h5web/app` requires React 18.

This package supports TypeScript out of the box, without the need to install a separate `@types/` package.
## Getting started 🚀
```bash
npm install @h5web/app
```

```tsx
import '@h5web/app/styles.css';

import React from 'react';
import { App, MockProvider } from '@h5web/app';

function MyApp() {
  return (
    <div style={{ height: '100vh' }}>
      <MockProvider>
        <App />
      </MockProvider>
    </div>
  );
}

export default MyApp;
```

## Examples
The following code sandboxes demonstrate how to set up and use @h5web/app with
various front-end development stacks:
## Browser support
H5Web works out of the box on Firefox 102 ESR. Support for older versions might be achieved by polyfilling specific web platform features, like `Object.hasOwn()`.
## API reference

### App
Renders the HDF5 viewer.

For `App` to work, it must be wrapped in a data provider:

```tsx
<MockProvider>
  <App />
</MockProvider>
```

#### `sidebarOpen?: boolean` (optional)
Whether the viewer should start with the sidebar open. The sidebar contains the
explorer and search panels. Defaults to true. Pass false to hide the sidebar
on initial render, thus giving more space to the visualization. This is useful
when H5Web is embedded inside another app.
```tsx
<App sidebarOpen={false} />
```

This replaces prop `explorerOpen`, which was deprecated in v7.1.0 and removed in v8.0.0.
#### `initialPath?: string` (optional)

The path to select within the file when the viewer is first rendered. Defaults to `'/'`.
```tsx
<MockProvider>
  <App initialPath="/arrays/threeD" />
</MockProvider>
```

#### `getFeedbackURL?: (context: FeedbackContext) => string` (optional)
If provided, a "Give feedback" button appears in the breadcrumbs bar, which invokes the function when clicked. The function should return a valid URL, for instance a `mailto:` URL with a pre-filled subject and body: `mailto:[email protected]?subject=Feedback&body=<url-encoded-text>`. If the app is publicly available, we recommend returning the URL of a secure online contact form instead.
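As a rough illustration, such a `mailto:` URL can be assembled by hand with `encodeURIComponent`. The helper and body template below are assumptions for the sketch (the context fields are those destructured in the library's own example; the library also exports a ready-made `getFeedbackMailto` utility, documented further down):

```typescript
// Hypothetical helper: build a "mailto:" feedback URL by hand.
interface FeedbackContext {
  filePath: string; // path of current file
  entityPath: string; // path of currently selected entity
}

function buildFeedbackMailto(context: FeedbackContext, email: string): string {
  const subject = encodeURIComponent('Feedback');
  const body = encodeURIComponent(
    `File: ${context.filePath}\nEntity: ${context.entityPath}\n\n`,
  );

  return `mailto:${email}?subject=${subject}&body=${body}`;
}

const url = buildFeedbackMailto(
  { filePath: 'my-file.h5', entityPath: '/arrays/twoD' },
  '[email protected]',
);
// "mailto:[email protected]?subject=Feedback&body=File%3A%20my-file.h5%0AEntity%3A%20%2Farrays%2FtwoD%0A%0A"
```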
```tsx
<App getFeedbackURL={() => 'https://my-feedback-form.com'} />
```

```tsx
<App
  getFeedbackURL={(context) => {
    const {
      filePath, // path of current file
      entityPath, // path of currently selected entity
    } = context;

    return `mailto:[email protected]?subject=Feedback&body=${encodeURIComponent(...)}`;
  }}
/>
```

#### `disableDarkMode?: boolean` (optional)
By default, the viewer follows your browser's and/or operating system's dark mode setting. This prop disables this behavior by forcing the viewer into light mode.
```tsx
<App disableDarkMode />
```

#### `propagateErrors?: boolean` (optional)
The viewer has a top-level ErrorBoundary that, by default, handles errors
thrown outside of the visualization area. These include errors thrown by the
data provider when fetching metadata for the explorer. If you prefer to
implement your own error boundary, you may choose to let errors through the
viewer's top-level boundary:
```tsx
import { ErrorBoundary } from 'react-error-boundary';

<ErrorBoundary FallbackComponent={MyErrorFallback}>
  <MockProvider>
    <App propagateErrors />
  </MockProvider>
</ErrorBoundary>;
```

### H5GroveProvider
Data provider for H5Grove.
```tsx
<H5GroveProvider url="https://h5grove.server.url" filepath="some-file.h5">
  <App />
</H5GroveProvider>
```

#### `url: string` (required)
The base URL of the H5Grove server.
#### `filepath: string` (required)
The path and/or name of the file to display in the UI.
#### `fetcher?: Fetcher` (optional)
An optional asynchronous function to fetch data and metadata from an h5grove
back-end. The function accepts a URL together with some query parameters and
options, and is expected to return a promise that resolves to an ArrayBuffer.
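The contract described above can be sketched in plain TypeScript. The exact `Fetcher` type is defined by `@h5web/app`; the signature below is an assumption based on this description (URL + query parameters + options → `ArrayBuffer`), shown with a toy in-memory fetcher:

```typescript
// Assumed shape of the Fetcher contract, for illustration only.
type Fetcher = (
  url: string,
  params: Record<string, string>,
  opts?: RequestInit,
) => Promise<ArrayBuffer>;

// A toy in-memory fetcher (e.g. for tests) that just echoes the request:
const fakeFetcher: Fetcher = async (url, params) => {
  const query = new URLSearchParams(params).toString();
  return new TextEncoder().encode(`${url}?${query}`).buffer as ArrayBuffer;
};

fakeFetcher('/meta', { path: '/' }).then((buf) => {
  console.log(new TextDecoder().decode(buf)); // "/meta?path=%2F"
});
```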
If `fetcher` is not provided, `H5GroveProvider` creates one based on the native Fetch API using `createBasicFetcher`, but without any form of authentication. In production, we recommend deploying a secure h5grove back-end and initialising a fetcher that sends the expected authentication headers.
If you have to initialise the `fetcher` during render, make sure to memoise it so the fetching cache isn't cleared every time your app re-renders.
#### `getExportURL?: (...args) => URL | (() => Promise<URL | Blob>) | undefined` (optional)
Some visualizations allow exporting the current dataset/slice to various formats. For instance, the Line visualization allows exporting to CSV and NPY; the Heatmap visualization to NPY and TIFF, etc.
For each format, the viewer invokes the provider's getExportURL method. If
this method returns a URL or an async function, then the export menu in the
toolbar shows an entry for the corresponding export format.
In the case of JSON and CSV, the viewer itself takes care of the export by
providing its own "exporter" function to the getExportURL method. When this
happens, the getExportURL method just returns a function that calls the
exporter.
In the case of NPY and TIFF, H5GroveApi#getExportURL returns a URL so the
export can be generated server-side by h5grove.
The optional getExportURL prop is called internally by the getExportURL
method and allows taking over the export process. It enables advanced use cases
like generating exports from an authenticated endpoint.
```tsx
// Fetch export data from an authenticated endpoint
getExportURL={(format, dataset, selection) => async () => {
  const query = new URLSearchParams({ format, path: dataset.path, selection });
  const response = await fetch(`${AUTH_EXPORT_ENDPOINT}?${query.toString()}`, {
    headers: { /* authentication header */ },
  });

  return response.blob();
}}
```

```tsx
// Fetch a one-time export link
getExportURL={(format, dataset, selection) => async () => {
  const query = new URLSearchParams({ format, path: dataset.path, selection });
  const response = await fetch(`${AUTH_TOKEN_ENDPOINT}?${query.toString()}`, {
    headers: { /* authentication header */ },
  });

  // Response body contains a temporary, pre-authenticated export URL
  return new URL(await response.text());
}}
```

```tsx
// Tweak a built-in export payload in some way (round or format numbers, truncate lines, etc.)
getExportURL={(format, dataset, selection, builtInExporter) => async () => {
  if (!builtInExporter || format !== 'csv') {
    return undefined;
  }

  const csvPayload = builtInExporter();
  return csvPayload.split('\n').slice(0, 100).join('\n'); // truncate to first 100 lines
}}
```

#### `resetKeys?: unknown[]` (optional)
You can pass variables in resetKeys that, when changed, will reset the
provider's internal fetch cache. You may want to do this, for instance, when the
content of the current file changes and you want the viewer to refetch the
latest metadata and dataset values.
It is up to you to decide what sort of keys to use and when to update them. For instance:
- Your server could send over a hash of the file via WebSocket.
- You could show a toast notification with a Refresh button when the file changes and simply increment a number when the button is clicked (cf. contrived example below).
```tsx
function MyApp() {
  const [key, setKey] = useState(0);
  const incrementKey = useCallback(() => setKey((val) => val + 1), []);

  return (
    <>
      <button type="button" onClick={incrementKey}>
        Refresh
      </button>
      <H5GroveProvider resetKeys={[key]} /* ... */>
        <App />
      </H5GroveProvider>
    </>
  );
}
```

### HsdsProvider
Data provider for HSDS.
```tsx
<HsdsProvider
  url="https://hsds.server.url"
  username="foo"
  password="abc123"
  filepath="/home/reader/some-file.h5"
>
  <App />
</HsdsProvider>
```

#### `url: string` (required)
The base URL of the HSDS server.
#### `username: string; password: string` (required)
The credentials to use to authenticate to the HSDS server. Note that this authentication mechanism is not secure; please do not use it to grant access to private data.
#### `filepath: string` (required)
The path of the file to request.
#### `fetcher?: Fetcher` (optional)
An asynchronous function to fetch data and metadata from an HSDS back-end. The
function accepts a URL together with some query parameters and options, and is
expected to return a promise that resolves to an ArrayBuffer. The fetcher must
also send the required HSDS authentication headers.
To get you started, if your HSDS back-end is configured with basic HTTP authentication, you can use `createBasicFetcher` together with `buildBasicAuthHeader`:

```tsx
const fetcher = createBasicFetcher({
  headers: buildBasicAuthHeader(USERNAME, PASSWORD),
});
```

However, beware that this authentication mechanism is not secure; do not use it to grant access to private data.

If you have to initialise the `fetcher` during render, make sure to memoise it so the fetching cache isn't cleared every time your app re-renders.
#### `getExportURL?: (...args) => URL | (() => Promise<URL | Blob>) | undefined` (optional)
See `H5GroveProvider#getExportURL`.

`HsdsProvider` doesn't support the NPY and TIFF export formats out of the box.
#### `resetKeys?: unknown[]` (optional)

See `H5GroveProvider#resetKeys`.
### MockProvider
Data provider for demonstration and testing purposes.
```tsx
<MockProvider>
  <App />
</MockProvider>
```

#### `getExportURL?: (...args) => URL | (() => Promise<URL | Blob>) | undefined` (optional)
See `H5GroveProvider#getExportURL`.

`MockProvider` doesn't support the NPY and TIFF export formats out of the box.
## Utilities

### `createBasicFetcher: (fetchOpts?: Omit<RequestInit, 'signal'>) => Fetcher`
Create a fetcher function based on the native Fetch API. Accepts an optional `RequestInit` object to configure requests, for instance with an `Authorization` header:

```tsx
const fetcher = createBasicFetcher({
  headers: { Authorization: `Bearer ${token}` },
});
```

To add custom query parameters to the requests:

```tsx
const basicFetcher = createBasicFetcher();

async function fetcher(...args: Parameters<Fetcher>) {
  const [url, params, opts] = args;
  return basicFetcher(url, { ...params, myOwnParam }, opts);
}
```

### `createAxiosFetcher: (axiosInstance: AxiosInstance) => Fetcher`
Create a fetcher function from an axios instance. Note that you will need to install axios in your application.

```tsx
const fetcher = createAxiosFetcher(axios.create({ adapter: 'fetch' }));

function MyApp() {
  // ...
  return (
    <H5GroveProvider url={URL} filepath={FILE} fetcher={fetcher}>
      {/* ... */}
    </H5GroveProvider>
  );
}
```

You can configure the axios instance as you see fit. For instance, you can specify authentication headers, or set up request interceptors to refresh tokens and retry requests automatically. However, do note that some options have no effect, notably `url`/`baseUrl`, `responseType`, `signal` and `onDownloadProgress`.
### `buildBasicAuthHeader: (username: string, password: string) => Record<string, string>`
Build an `Authorization` header for basic HTTP authentication from a username and password.
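What such a builder boils down to is the RFC 7617 encoding: base64 of `username:password`. The stand-alone version below is an illustrative sketch, not the library's actual implementation:

```typescript
// Illustrative sketch of a basic-auth header builder (RFC 7617).
function basicAuthHeader(
  username: string,
  password: string,
): Record<string, string> {
  // Base64-encode "username:password" (Node's Buffer; browsers would use btoa)
  const credentials = Buffer.from(`${username}:${password}`).toString('base64');
  return { Authorization: `Basic ${credentials}` };
}

console.log(basicAuthHeader('foo', 'abc123'));
// { Authorization: 'Basic Zm9vOmFiYzEyMw==' }
```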
### `getFeedbackMailto: (context: FeedbackContext, email: string, subject?) => string`
Generate a feedback `mailto:` URL using H5Web's built-in feedback email template.

```tsx
(context: FeedbackContext, email: string, subject = 'Feedback') => string;
```

```tsx
import { getFeedbackMailto } from '@h5web/app';
...

<App getFeedbackURL={(context) => {
  return getFeedbackMailto(context, '[email protected]');
}} />
```

### `enableBigIntSerialization: () => void`
Invoke this function before rendering your application to allow the Scalar visualization and metadata inspector to serialize and display big integers:

```tsx
enableBigIntSerialization();
createRoot(document.querySelector('#root')).render(<MyApp />);
```

This is recommended if you work with a provider that supports 64-bit integers, i.e. one that may provide dataset and attribute values that include primitive `bigint` numbers (currently only `MockProvider`).

The Scalar visualization and metadata inspector rely on `JSON.stringify()` to render dataset and attribute values. By default, `JSON.stringify()` does not know how to serialize `bigint` numbers and throws an error if it encounters one. `enableBigIntSerialization()` teaches `JSON.stringify()` to convert big integers to strings:

```tsx
> JSON.stringify(123n);
TypeError: Do not know how to serialize a BigInt

> enableBigIntSerialization();
> JSON.stringify(123n);
"123n"
```

The `n` suffix (i.e. the same suffix used for `bigint` literals, as demonstrated above) is added to help distinguish big-integer strings from other strings.

If your application already implements `bigint` serialization, you don't need to call `enableBigIntSerialization()`. Doing so would override the existing implementation, which might have unintended effects.
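As a rough sketch of what such serialization support can look like (an assumption about the implementation, shown as a direct `BigInt.prototype` patch; in practice, call the library's `enableBigIntSerialization()` instead of patching this yourself):

```typescript
// Illustrative: give BigInt.prototype a toJSON method so JSON.stringify()
// no longer throws on bigint values.
Object.defineProperty(BigInt.prototype, 'toJSON', {
  configurable: true,
  writable: true,
  value(this: bigint): string {
    return `${this.toString()}n`; // "n" suffix marks big-integer strings
  },
});

console.log(JSON.stringify(123n)); // prints "123n" (a quoted string)
console.log(JSON.stringify({ big: 9007199254740993n }));
// prints {"big":"9007199254740993n"}
```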
## Context
The viewer component `App` communicates with its wrapping data provider through a React context called `DataContext`. This context is available via a custom hook called `useDataContext`. This means you can use the built-in data providers in your own applications:

```tsx
<MockProvider>
  <MyApp />
</MockProvider>;

function MyApp() {
  const { filename } = useDataContext();
  return <p>{filename}</p>;
}
```

`useDataContext` returns the following object:
```tsx
interface DataContextValue {
  filepath: string;
  filename: string;
  entitiesStore: EntitiesStore;
  valuesStore: ValuesStore;
  attrValuesStore: AttrValuesStore;
}
```

The three stores are created with the react-suspense-fetch library, which relies on React Suspense. A component that uses one of these stores (e.g. `entitiesStore.get('/path/to/entity')`) must have a `Suspense` ancestor to manage the loading state.
```tsx
<MockProvider>
  <Suspense fallback={<span>Loading...</span>}>
    <MyApp />
  </Suspense>
</MockProvider>;

function MyApp() {
  const { entitiesStore } = useDataContext();
  const group = entitiesStore.get('/resilience/slow_metadata');
  return <pre>{JSON.stringify(group, null, 2)}</pre>;
}
```

A common need is to find specific datasets in a file and retrieve their values. You can do so with hooks `useDatasets` and `useValues` as follows:
```tsx
const DATASETS_DEFS = {
  twoD: { path: '/twoD', shape: ShapeClass.Array, type: DTypeClass.Float },
  title: { path: '/title', shape: ShapeClass.Scalar, type: DTypeClass.String },
};

function MyApp() {
  const datasets = useDatasets(DATASETS_DEFS);
  const { twoD, title } = useValues(datasets); // `number[] | TypedArray` and `string` respectively

  // Or if you just need a specific slice from the `twoD` dataset:
  const { slice, title } = useValues({
    slice: { dataset: datasets.twoD, selection: '2,:' },
    title: datasets.title,
  });
}
```

We also provide two simpler hooks, `useEntity` and `useValue`, as well as a large number of type guards and assertion functions to narrow down the kind/shape/type of HDF5 entities returned by `useEntity`.
```tsx
const entity = useEntity('/arrays/twoD'); // ProvidedEntity

assertDataset(entity); // Dataset
assertArrayShape(entity); // Dataset<ArrayShape>
assertFloatType(entity); // Dataset<ArrayShape, FloatType>

const value = useValue(entity); // number[] | TypedArray

// Or a specific slice:
const slice = useValue(entity, '2,:');
```

Once you have a raw value array, you can use the memoised hook `useNdArray` to wrap it in an ndarray, and then pass it down to a visualization component from `@h5web/lib`:
```tsx
const value = useValue(entity); // number[] | TypedArray
const dataArray = useNdArray(value, entity.shape); // NdArray<number[] | TypedArray>
const domain = useDomain(dataArray); // [number, number]

return (
  <HeatmapVis
    style={{ width: '100vw', height: '100vh' }}
    dataArray={dataArray}
    domain={domain}
  />
);
```

Every store comes with a `prefetch` method that works like `get` but doesn't trigger the Suspense boundary and doesn't return a value. If you work with a remote provider like H5Grove and need to access multiple entities/values at once, it's important to prefetch every entity/value first so the requests are done in parallel. `useDatasets` and `useValues` do this automatically, but not `useEntity` and `useValue`:
```tsx
const { valuesStore } = useDataContext();

valuesStore.prefetch(abscissasDataset);
valuesStore.prefetch(ordinatesDataset);

const abscissas = useValue(abscissasDataset);
const ordinates = useValue(ordinatesDataset);
```

To work with HDF5 attributes, retrieve an entity object with `useEntity` or `useDatasets` and pass it to `findAttribute`. Then, you can check or assert its type and shape and retrieve its value with `getAttributeValue`:
```tsx
const { attrValuesStore } = useDataContext();
const entity = useEntity('/arrays/twoD'); // ProvidedEntity

// If you just want to know whether the attribute is present
const hasAttr = hasAttribute(entity, 'my_attr'); // boolean

// Otherwise, find it
const attribute = findAttribute(entity, 'my_attr'); // Attribute | undefined

// If the attribute must be present and have the expected shape and type, use type assertions
assertDefined(attribute);
assertArrayShape(attribute);
assertStringType(attribute); // now `Attribute & HasShape<ArrayShape> & HasType<StringType>`

// Otherwise, use type guards and an `if` block
if (
  isDefined(attribute) &&
  hasArrayShape(attribute) &&
  hasStringType(attribute)
) {
  const someStr = getAttributeValue(entity, attribute, attrValuesStore); // string
  someStr.startsWith('foo'); // `someStr` is fully type-checked; no need to use `typeof`
}
```

With scalar string and numeric attributes, use `findScalarStrAttr` and `findScalarNumAttr` for convenience:
```tsx
const strAttr = findScalarStrAttr(entity, 'my_str_attr');
const numAttr = findScalarNumAttr(entity, 'my_num_attr');

assertDefined(strAttr); // or `isDefined` + `if` block
assertDefined(numAttr);

const str = getAttributeValue(entity, strAttr, attrValuesStore); // string
const num = getAttributeValue(entity, numAttr, attrValuesStore); // number | bigint