@asaidimu/utils-artifacts
v6.0.1 - Reactive artifact container.
A powerful TypeScript library for managing application components (artifacts) with dependency injection, reactive state updates, and robust lifecycle management. This library provides a flexible container to define, resolve, and orchestrate various parts of your application, each reacting dynamically to state changes and dependencies.
🚀 Quick Links
- Overview & Features
- Installation & Setup
- Usage Documentation
- Project Architecture
- Development & Contributing
- Additional Information
✨ Overview & Features
@asaidimu/utils-artifacts is a reactive dependency injection container designed to bring order and efficiency to complex application architectures. It allows you to define application components (called "artifacts") as factories that declare their dependencies on other artifacts and global application state. The container automatically manages the lifecycle, instantiation, and invalidation of these artifacts, ensuring that components are always up-to-date with their dependencies.
Key Features
- Dependency Injection (DI): Declare dependencies within artifact factories using ctx.use, ctx.resolve, and ctx.require.
- Reactive State Management: Automatically re-evaluate artifacts when slices of a global DataStore (or any compatible state management system) change via ctx.select.
- Flexible Scoping: Supports Singleton (single, cached instance) and Transient (new instance per resolution) artifact scopes.
- Advanced Lifecycle Management: onCleanup for instance-specific cleanup and onDispose for permanent resource release.
- Streaming Artifacts: Use ctx.stream to build long-lived artifacts that continuously emit new values, ideal for real-time data, WebSockets, or periodic tasks.
- Robust Error Handling & Retries: Integrated retry logic (the Retry utility) for resilient operations and a comprehensive ArtifactError hierarchy for clear diagnostics.
- Concurrency Primitives: Internal Once and Serializer utilities for managing race conditions and sequential execution of async operations.
- Debuggability: container.debugInfo() provides a snapshot of the artifact graph, statuses, and dependencies for easy troubleshooting.
- Watcher API: container.watch() allows external consumers to subscribe to artifact value changes without directly resolving them.
- Circular Dependency Detection: Prevents infinite loops during resolution by detecting and reporting cycles in the dependency graph.
📦 Installation & Setup
Prerequisites
- Node.js (LTS recommended)
- npm or Yarn (package manager)
- A reactive data store library that adheres to the DataStore interface (e.g., @asaidimu/utils-store).
Installation Steps
To install the library, use your preferred package manager:
npm install @asaidimu/utils-artifacts
# or
yarn add @asaidimu/utils-artifacts
Configuration
@asaidimu/utils-artifacts is a library and does not require global configuration files. Its primary setup involves initializing the ArtifactContainer with an instance of a DataStore.
Verification
You can verify the installation by attempting to import and register a basic artifact:
import { ArtifactContainer } from '@asaidimu/utils-artifacts';
import { ReactiveDataStore } from '@asaidimu/utils-store'; // Assuming you have this store
// Define your application's global state and artifact registry types
type AppState = { count: number; };
type AppRegistry = { myService: string; };
const store = new ReactiveDataStore<AppState>({ count: 0 });
const container = new ArtifactContainer<AppRegistry, AppState>(store);
container.register({
key: 'myService',
factory: async ({ use }) => {
const currentCount = await use(({ select }) => select(s => s.count));
return `Service is running with count: ${currentCount}`;
},
});
async function runExample() {
const service = await container.resolve('myService');
console.log(service.instance); // Expected: Service is running with count: 0
}
runExample();
📖 Usage Documentation
Basic Usage
The core of the library is the ArtifactContainer. You initialize it with a DataStore that manages your application's global state.
import { ArtifactContainer, ArtifactScopes } from '@asaidimu/utils-artifacts';
import { ReactiveDataStore } from '@asaidimu/utils-store';
// 1. Define your application's global state and artifact registry types
interface AppState {
appName: string;
version: string;
config: {
theme: 'light' | 'dark';
apiUrl: string;
};
}
interface AppRegistry {
logger: LoggerService;
apiClient: ApiClient;
themeService: 'light' | 'dark';
appInfo: string;
}
// Example DataStore (from @asaidimu/utils-store)
const appStore = new ReactiveDataStore<AppState>({
appName: 'My App',
version: '1.0.0',
config: {
theme: 'light',
apiUrl: 'https://api.example.com',
},
});
// 2. Instantiate the ArtifactContainer
const container = new ArtifactContainer<AppRegistry, AppState>(appStore);
// 3. Define and register your artifact factories
class LoggerService {
constructor(private prefix: string) {}
log(message: string) {
console.log(`[${this.prefix}] ${message}`);
}
}
container.register({
key: 'logger',
scope: ArtifactScopes.Singleton, // Ensure only one instance of the logger
factory: () => new LoggerService('APP'),
});
container.register({
key: 'apiClient',
scope: ArtifactScopes.Singleton,
factory: async ({ use }) => {
// ApiClient depends on the logger and state.config.apiUrl
const logger = await use(({ require }) => require('logger'));
const apiUrl = await use(({ select }) => select(state => state.config.apiUrl));
logger.log(`Initializing API client for ${apiUrl}`);
return {
fetchData: async (path: string) => {
logger.log(`Fetching from ${apiUrl}${path}`);
// Simulate API call
await new Promise(res => setTimeout(res, 100));
return { message: `Data from ${apiUrl}${path}` };
}
};
},
});
container.register({
key: 'themeService',
scope: ArtifactScopes.Singleton,
factory: async ({ use }) => {
// Theme service depends directly on state
return await use(({ select }) => select(state => state.config.theme));
},
});
container.register({
key: 'appInfo',
scope: ArtifactScopes.Singleton,
factory: async ({ use }) => {
// App info depends on other artifacts and state
const logger = await use(({ require }) => require('logger'));
const apiClient = await use(({ require }) => require('apiClient'));
const appName = await use(({ select }) => select(s => s.appName));
const data = await apiClient.fetchData('/info');
logger.log(`App Info: ${appName}, Data: ${data.message}`);
return `App: ${appName}, API Data: ${data.message}`;
},
});
// 4. Resolve artifacts to use them
async function main() {
const logger = await container.resolve('logger');
logger.instance?.log('Application started.');
const appInfo = await container.resolve('appInfo');
console.log(appInfo.instance); // Logs the app information after API call
// Example of reacting to state changes
console.log('\n--- Changing theme ---');
await appStore.set(s => ({ ...s, config: { ...s.config, theme: 'dark' } }));
// Because themeService depends on state.config.theme, it will be rebuilt
// Any artifact that depends on themeService would also be rebuilt.
const newTheme = await container.resolve('themeService');
console.log('New Theme:', newTheme.instance); // Expected: dark
}
main();
Registering Artifacts
Use container.register() to define an artifact. Each artifact requires a unique key and a factory function.
import { ArtifactScopes } from '@asaidimu/utils-artifacts';
container.register({
key: 'myArtifact',
factory: () => 'Hello, Artifact!',
// Optional parameters:
scope: ArtifactScopes.Singleton, // 'singleton' (default) or 'transient'
lazy: true, // true (default) for singletons, false to build on registration
timeout: 5000, // Max time in ms for factory to complete
retries: 3, // Number of retries on factory failure
debounce: 100, // Delay in ms for rebuilding on dependency changes
});
The factory function receives an ArtifactFactoryContext object:
interface ArtifactFactoryContext<TRegistry, TState, TArtifact> {
state(): TState; // Get current global state (non-reactive)
previous?: TArtifact; // Previous instance (for singletons on rebuild)
use<R>(callback: (ctx: UseDependencyContext<TRegistry, TState>) => R | Promise<R>): Promise<R>;
onCleanup(cleanup: ArtifactCleanup): void; // Register cleanup for current instance
onDispose(callback: ArtifactCleanup): void; // Register cleanup for artifact (permanent)
stream(callback: (ctx: ArtifactStreamContext<TState, TArtifact>) => ...): void; // Start streaming values (singletons only)
}
interface UseDependencyContext<TRegistry, TState> {
resolve<K extends keyof TRegistry>(key: K): Promise<ResolvedArtifact<TRegistry[K]>>; // Resolve an artifact (returns ResolvedArtifact)
require<K extends keyof TRegistry>(key: K): Promise<TRegistry[K]>; // Resolve an artifact (throws on error, returns instance directly)
select<S>(selector: (state: TState) => S): S; // Select state slice (reactive)
}
Resolving & Requiring Artifacts
- container.resolve(key): Returns a Promise<ResolvedArtifact<T>>. ResolvedArtifact is a union type that can be ReadyArtifact, ErrorArtifact, or PendingArtifact. You should check the ready and error properties.
- container.require(key): Returns a Promise<T> directly. If resolution fails or the artifact has an error, it throws that error. Use this when you are certain the artifact will resolve successfully.
import { ArtifactError } from '@asaidimu/utils-artifacts';
// Using resolve (recommended for robust error handling)
const myArtifactResult = await container.resolve('myArtifact');
if (myArtifactResult.ready) {
console.log('Artifact instance:', myArtifactResult.instance);
} else if (myArtifactResult.error) {
console.error('Artifact failed to resolve:', myArtifactResult.error);
} else {
console.log('Artifact is pending/idle.');
}
// Using require (for simpler usage when errors are handled upstream or unexpected)
try {
const myArtifactInstance = await container.require('myArtifact');
console.log('Artifact instance:', myArtifactInstance);
} catch (error) {
if (error instanceof ArtifactError) {
console.error('Artifact system error:', error.message);
} else {
console.error('Artifact runtime error:', error);
}
}
Watching Artifact Changes
The watch() method provides an observer pattern to react to artifact changes without needing to repeatedly call resolve(). It's particularly useful for UI frameworks.
const myServiceWatcher = container.watch('myService');
const unsubscribe = myServiceWatcher.subscribe((resolvedArtifact) => {
if (resolvedArtifact.ready) {
console.log('myService updated:', resolvedArtifact.instance);
} else if (resolvedArtifact.error) {
console.error('myService error:', resolvedArtifact.error);
}
// The `get()` method can also be used inside the callback or outside
// to get the current state of the artifact.
const current = myServiceWatcher.get();
console.log('Current state from get():', current.instance);
});
// To stop receiving updates:
unsubscribe();
Reactive Dependencies (State Selection)
Artifacts can react to changes in the global DataStore by using ctx.select().
interface UserSettings { userId: string; theme: string; };
type AppRegistry = { userPreference: string };
const userStore = new ReactiveDataStore<UserSettings>({ userId: 'guest', theme: 'light' });
const userContainer = new ArtifactContainer<AppRegistry, UserSettings>(userStore);
userContainer.register({
key: 'userPreference',
factory: async ({ use }) => {
// This artifact will be rebuilt if state.theme changes
const theme = await use(({ select }) => select(state => state.theme));
return `Current theme is: ${theme}`;
},
});
async function runReactiveExample() {
let preference = await userContainer.resolve('userPreference');
console.log(preference.instance); // Output: Current theme is: light
// Update the store, which will trigger 'userPreference' to rebuild
await userStore.set(s => ({ ...s, theme: 'dark' }));
// Resolve again to get the new instance
preference = await userContainer.resolve('userPreference');
console.log(preference.instance); // Output: Current theme is: dark
}
runReactiveExample();
Artifact Lifecycle (Cleanup & Dispose)
- ctx.onCleanup(fn): Registers a function to run before a singleton artifact is rebuilt (due to invalidation) or before a transient artifact instance is discarded. Use this for instance-specific resource release (e.g., clearing timers, event listeners).
- ctx.onDispose(fn): Registers a function to run only when the artifact is permanently unregistered from the container or the container itself is disposed. Use this for permanent resource release (e.g., closing database connections, unsubscribing from global events).
container.register({
key: 'myResource',
scope: ArtifactScopes.Singleton,
factory: ({ onCleanup, onDispose }) => {
const resource = { id: Math.random(), intervalId: setInterval(() => {}, 1000) };
console.log(`Resource ${resource.id} created.`);
onCleanup(() => {
console.log(`Cleaning up instance ${resource.id}...`);
clearInterval(resource.intervalId);
});
onDispose(() => {
console.log(`Disposing artifact 'myResource'.`);
// Additional permanent resource release here
});
return resource;
},
});
async function lifecycleExample() {
await container.resolve('myResource');
// Simulate an invalidation (e.g., a dependency changed)
await container.invalidate('myResource'); // Triggers cleanup, then rebuilds
await container.resolve('myResource'); // A new instance is now resolved.
// When unregistering, onDispose is called
await container.unregister('myResource'); // Triggers onCleanup (if active), then onDispose
}
lifecycleExample();
Streaming Artifacts
Singletons can continuously emit new values using ctx.stream(). This is powerful for reactive data sources.
container.register({
key: 'counterStream',
scope: ArtifactScopes.Singleton,
factory: ({ stream, onCleanup }) => {
let count = 0;
let interval: ReturnType<typeof setInterval>;
stream(async ({ emit, signal }) => {
console.log('Counter stream started...');
interval = setInterval(() => {
if (signal.aborted) {
console.log('Stream aborted, stopping interval.');
clearInterval(interval);
return;
}
count++;
emit(count); // Emit the new value
if (count >= 5) {
console.log('Count limit reached, stopping stream.');
clearInterval(interval);
return; // Stream producer can return to end the stream
}
}, 500);
});
onCleanup(() => {
console.log('Cleaning up counter stream instance...');
// Ensure interval is cleared if stream is aborted/rebuilt
clearInterval(interval);
});
return 0; // Initial value before stream starts emitting
},
});
async function streamExample() {
const watcher = container.watch('counterStream');
const unsubscribe = watcher.subscribe((art) => {
if (art.ready) console.log('Counter value:', art.instance);
});
// Keep alive for a few seconds to see stream emissions
await new Promise(res => setTimeout(res, 3000));
unsubscribe();
console.log('Stream watcher unsubscribed.');
}
streamExample();
Invalidating Artifacts
You can manually trigger an artifact to rebuild, which will also cascade invalidations to its dependents.
// Invalidate a specific artifact
await container.invalidate('myArtifact');
// Force immediate rebuild, bypassing any debounce delay
await container.invalidate('myArtifact', true);
Debugging Artifacts
The debugInfo() method provides a snapshot of the container's internal state, useful for understanding dependencies, status, and build counts.
const debugNodes = container.debugInfo();
debugNodes.forEach(node => {
console.log(`\nID: ${node.id}`);
console.log(` Scope: ${node.scope}`);
console.log(` Status: ${node.status}`); // active, error, idle, building, pending, debouncing
console.log(` Dependencies (Artifacts): ${node.dependencies.join(', ') || 'None'}`);
console.log(` Dependencies (State Paths): ${node.stateDependencies.join(', ') || 'None'}`);
console.log(` Dependents: ${node.dependents.join(', ') || 'None'}`);
console.log(` Build Count: ${node.buildCount}`);
});
Retry Utility
The Retry class provides flexible retry logic for any async operation, supporting various strategies.
import { Retry, RetryExhaustedError, RetryPredicates } from '@asaidimu/utils-artifacts/retry';
const unreliableOperation = async (attempt: number) => {
if (attempt < 3) {
console.log(`Unreliable operation failing on attempt ${attempt}`);
throw new Error('Transient network error');
}
console.log(`Unreliable operation succeeding on attempt ${attempt}`);
return 'Success!';
};
async function runRetryExample() {
try {
const result = await Retry.execute(
() => unreliableOperation(retryAttempt), // `retryAttempt` is for demonstration, actual attempt is internal.
{
retries: 4, // Total attempts: 1 (initial) + 4 (retries) = 5
strategy: 'exponential',
delay: 100, // Base delay 100ms
factor: 2, // Multiplier 2x (100, 200, 400, 800)
maxDelay: 1000,
onRetry: (err, attempt, nextDelay) => {
console.warn(`Attempt ${attempt} failed: ${err}. Retrying in ${nextDelay}ms.`);
retryAttempt = attempt; // For demonstration only
},
}
);
console.log('Retry successful:', result);
} catch (e) {
if (e instanceof RetryExhaustedError) {
console.error(`Retry exhausted after ${e.attempts} attempts. Last error:`, e.lastError);
} else {
console.error('Unexpected error:', e);
}
}
// Example with conditional retry based on error type
const fetchWithRetry = async (url: string) => {
return Retry.execute(
async () => {
const response = await fetch(url);
if (response.status >= 500) {
throw { status: response.status, message: 'Server error' }; // Simulate HTTP 5xx
}
return response.json();
},
{
retries: 5,
strategy: 'conditional',
shouldRetry: RetryPredicates.any(
RetryPredicates.networkErrors,
RetryPredicates.serverErrors,
RetryPredicates.httpStatus(429) // Also retry on Too Many Requests
),
delay: (attempt) => Math.min(100 * Math.pow(2, attempt), 2000), // Custom delay function
}
);
};
// const data = await fetchWithRetry('https://api.example.com/data');
}
let retryAttempt = 0; // Used for demonstration purposes only
runRetryExample();
Concurrency Utilities (Once & Serializer)
Once ensures a function runs exactly one time, caching its result. Serializer ensures functions run sequentially. These are primarily used internally but are exposed for advanced use cases.
import { Once, Serializer } from '@asaidimu/utils-artifacts/sync';
async function runOnceExample() {
const initialization = new Once<string>();
const expensiveInit = async () => {
console.log('Performing expensive initialization...');
await new Promise(res => setTimeout(res, 200));
return 'Initialized Resource';
};
const [res1, res2, res3] = await Promise.all([
initialization.do(expensiveInit),
initialization.do(expensiveInit),
initialization.do(expensiveInit),
]);
console.log(res1.value, res2.value, res3.value); // All will be 'Initialized Resource'
// expensiveInit will be called only once.
}
runOnceExample();
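Conceptually, Once works by caching the first in-flight promise so concurrent callers share a single execution. A rough self-contained sketch of that idea follows; it is not the library's implementation (whose results are wrapped, e.g. with a .value property), and the MiniOnce name is illustrative.

```typescript
// Minimal once-semantics sketch: concurrent callers share one execution.
class MiniOnce<T> {
  private promise: Promise<T> | undefined;

  do(fn: () => Promise<T>): Promise<T> {
    // Only the first caller actually invokes fn; everyone else
    // awaits the same cached promise.
    if (this.promise === undefined) this.promise = fn();
    return this.promise;
  }
}

let calls = 0;
const once = new MiniOnce<string>();
const init = async () => {
  calls++;
  return 'ready';
};

Promise.all([once.do(init), once.do(init)]).then(([a, b]) => {
  console.log(a, b, 'calls:', calls); // ready ready calls: 1
});
```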
async function runSerializerExample() {
const queue = new Serializer<string>();
const order: string[] = [];
const task1 = async () => {
await new Promise(res => setTimeout(res, 100));
order.push('Task 1');
return 'Result 1';
};
const task2 = async () => {
order.push('Task 2');
return 'Result 2';
};
const task3 = async () => {
await new Promise(res => setTimeout(res, 50));
order.push('Task 3');
return 'Result 3';
};
await Promise.all([
queue.do(task1),
queue.do(task2),
queue.do(task3),
]);
console.log(order); // Expected: ['Task 1', 'Task 2', 'Task 3']
}
runSerializerExample();
🏗️ Project Architecture
The ArtifactContainer is designed with a modular architecture, delegating responsibilities to specialized internal components:
- ArtifactContainer: The public API entry point. It orchestrates interactions between the other internal components, providing a unified interface for artifact management.
- ArtifactRegistry: Stores the definitions (ArtifactTemplates) of all registered artifacts, mapping unique keys to their factory functions and configuration options.
- ArtifactCache: Manages the storage and retrieval of resolved artifact instances, particularly for Singleton-scoped artifacts. It handles caching, versioning, and state associated with active instances.
- ArtifactDependencyGraph: A bidirectional graph (DependencyGraph) that maps artifact-to-artifact dependencies and tracks which artifacts depend on state paths. This is crucial for circular dependency detection and efficient invalidation cascades.
- ArtifactManager: The core lifecycle manager. It handles building artifacts (executing factories), managing retries and timeouts, orchestrating onCleanup/onDispose, propagating stream emissions, and driving the reactive invalidation process based on artifact and state dependencies.
- ArtifactObserverManager: Manages the container.watch() API, maintaining subscriptions from external consumers and notifying them of artifact state changes. It handles reference counting and lazy initialization of watched artifacts.
- Retry: A standalone utility providing flexible retry mechanisms (fixed, exponential, jittered, conditional) for any asynchronous operation.
- Once & Serializer: Low-level concurrency primitives used internally (and exposed) to ensure that asynchronous operations (like artifact builds or stream emissions) execute safely and predictably, avoiding race conditions.
Data Flow
- Registration: An ArtifactTemplate is registered with the ArtifactRegistry. A corresponding node is added to the ArtifactDependencyGraph.
- Resolution (container.resolve):
  - The ArtifactContainer forwards the request to the ArtifactManager.
  - The Manager consults the ArtifactCache. If a Singleton is already built and fresh, it is returned immediately.
  - Otherwise, the Manager retrieves the ArtifactTemplate from the Registry.
  - The factory is executed with an ArtifactFactoryContext. Inside the factory:
    - ctx.use(({ resolve }) => ...): Triggers recursive resolution of dependent artifacts. The Manager performs cycle detection via the DependencyGraph and updates artifact dependencies.
    - ctx.use(({ select }) => ...): Registers state path dependencies with the Manager, which then subscribes to these paths via the DataStore.
    - ctx.stream(...): For singletons, registers a stream producer that can emit new values.
    - ctx.onCleanup / ctx.onDispose: Registers lifecycle hooks.
  - The Manager handles retries for factory execution and commits the resulting instance (or error) to the ArtifactCache.
  - The ArtifactCache packages the result into a ResolvedArtifact for the consumer.
- Invalidation:
  - State Change: A change in the DataStore (observed by the ArtifactManager via store.watch()) triggers invalidation of dependent artifacts.
  - Artifact Change: A dependency being rebuilt, a stream emitting a new value, or a manual container.invalidate() triggers invalidation.
  - The ArtifactManager runs the onCleanup hooks for the old instance, removes it from the ArtifactCache, and uses the DependencyGraph to identify and cascade invalidations to all affected dependents.
  - If configured to do so (e.g., the artifact is not lazy, or it has watchers), the Manager triggers a rebuild for the artifact.
  - Finally, the ArtifactObserverManager notifies all active watchers.
Extension Points
The primary extension point is the ArtifactFactory function itself, which receives the ArtifactFactoryContext. This context allows artifacts to:
- Declare and react to external dependencies (other artifacts, global state).
- Manage their internal lifecycle (cleanup, disposal).
- Create streaming data sources.
- Integrate with the underlying DataStore for dispatching actions.
🛠️ Development & Contributing
Development Setup
To set up the project for local development:
- Clone the repository:
  git clone https://github.com/asaidimu/erp-utils.git
  cd erp-utils/src/artifacts
- Install dependencies:
  npm install
  # or
  yarn install
- Build the project (if applicable, though typically handled by IDE/watch mode):
  npm run build
  # Or `tsc` if not defined in package.json scripts
Scripts
The package.json defines the following scripts:
- npm test: Runs all tests once.
- npm run test:watch: Runs tests in watch mode, rerunning on file changes.
- npm run test:browser: Runs tests in a browser environment (if configured), typically once.
Testing
The project uses vitest for testing.
- To run all tests:
  npm test
- To run tests continuously during development:
  npm run test:watch
Tests utilize fake-indexeddb as seen in vitest.setup.ts, ensuring a consistent environment.
Contributing Guidelines
We welcome contributions! Please follow these guidelines:
- Fork the repository and create your branch from main.
- Ensure code quality: Write clean, readable TypeScript code. Adhere to the existing coding style (ESLint and Prettier are typically configured in the parent project).
- Tests: All new features and bug fixes should be accompanied by appropriate unit or integration tests. Ensure existing tests pass.
- Commit Messages: Use Conventional Commits for a clear and consistent commit history (e.g., feat: add new artifact scope, fix: resolve circular dependency issue).
- Pull Requests:
  - Open a detailed Pull Request describing the changes, new features, or bug fixes.
  - Reference any related issues.
  - Ensure your branch is up-to-date with main.
Issue Reporting
If you find a bug or have a feature request, please open an issue on the GitHub Issues page. Provide as much detail as possible, including steps to reproduce bugs and clear descriptions for feature requests.
ℹ️ Additional Information
Troubleshooting
- ArtifactNotFoundError: You're trying to resolve or watch an artifact that hasn't been registered. Double-check your artifact keys and ensure registration happens before resolution.
- CircularDependencyError: Your artifact graph forms a loop (e.g., A depends on B, and B depends on A). The debugInfo() output and the error message's path can help you identify the cycle. Redesign your dependencies to break it.
- TimeoutError: Your artifact factory took longer than the timeout specified during registration. This can indicate long-running sync operations, slow async dependencies, or an infinite loop. Increase the timeout or optimize your factory.
- Unexpected Rebuilds/No Rebuilds:
  - Use container.debugInfo() to inspect artifact statuses and stateDependencies/dependencies.
  - Ensure your ctx.select() selectors correctly identify the state slices you intend to react to.
  - Check debounce settings if an artifact seems to rebuild too frequently or too slowly.
  - Verify onCleanup and onDispose hooks are being called as expected to rule out resource leaks.
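The kind of loop that produces a CircularDependencyError can be found with a depth-first search over the declared dependencies. The following is a self-contained sketch of the technique, not the container's actual detector; the names (findCycle, Deps) are illustrative.

```typescript
// Illustrative cycle detection: walk dependencies depth-first, and if the
// current path revisits a node, report the loop from its first occurrence.
type Deps = Record<string, string[]>;

function findCycle(deps: Deps, start: string, path: string[] = []): string[] | null {
  if (path.includes(start)) return [...path.slice(path.indexOf(start)), start];
  for (const dep of deps[start] ?? []) {
    const cycle = findCycle(deps, dep, [...path, start]);
    if (cycle) return cycle;
  }
  return null;
}

const deps: Deps = { a: ['b'], b: ['c'], c: ['a'] }; // a -> b -> c -> a
console.log(findCycle(deps, 'a')); // [ 'a', 'b', 'c', 'a' ]
```

A reported path like a -> b -> c -> a tells you exactly which edge to redesign (for instance, extracting the shared piece of a and c into a third artifact both can depend on).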
FAQ
- What's the difference between Singleton and Transient scopes?
  - Singleton: Only one instance of the artifact is ever created. Subsequent resolve calls return the same instance. This is suitable for services, configurations, or shared resources. Singletons support state dependencies, onCleanup/onDispose, and stream.
  - Transient: A new instance is created every time the artifact is resolved. Useful for ephemeral objects that should not be shared or cached. Transients do not support stream or persistent onCleanup/onDispose (though a single cleanup can be returned by resolve).
- When should I use resolve versus require?
  - Use resolve when you need to defensively handle potential errors or pending states of an artifact. It returns a ResolvedArtifact object that allows explicit checks (.ready, .error).
  - Use require when you are confident the artifact will resolve successfully and prefer to get the instance directly, allowing errors to propagate as exceptions. This simplifies code where error handling is done at a higher level.
- How does debounce work for invalidation?
  - debounce adds a delay (in milliseconds) before an artifact rebuilds after its dependencies change. If multiple dependency changes occur within the debounce period, they are coalesced into a single rebuild, preventing excessive rapid rebuilds. container.invalidate(key, true) bypasses the debounce.
- What is the purpose of onCleanup vs. onDispose?
  - onCleanup: Tied to the current instance's lifecycle. It runs when a Singleton artifact's instance is replaced (e.g., due to an invalidation and rebuild), or when a Transient instance is discarded. Use it for resources specific to that particular instance.
  - onDispose: Tied to the artifact's registration lifecycle. It runs only when the artifact is permanently unregistered from the container or when the container itself is disposed. Use it for resources that should persist across instance rebuilds but be released when the artifact is no longer managed.
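The debounce coalescing described in the FAQ can be illustrated with a plain timer-based debounce. This is a sketch of the concept only, not the container's actual scheduler; the helper names are hypothetical.

```typescript
// Illustrative debounce: many rapid triggers collapse into one rebuild.
function debounce(fn: () => void, ms: number): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    // Each new trigger resets the window, so only the last one fires.
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(fn, ms);
  };
}

let rebuilds = 0;
const scheduleRebuild = debounce(() => { rebuilds++; }, 100);

// Three dependency changes in quick succession...
scheduleRebuild();
scheduleRebuild();
scheduleRebuild();

// ...result in a single rebuild once the debounce window elapses.
setTimeout(() => console.log('rebuilds:', rebuilds), 250); // rebuilds: 1
```

This is why container.invalidate(key, true) exists: it is the escape hatch for the cases where you need the rebuild immediately rather than after the coalescing window.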
Changelog/Roadmap
For detailed version history, please refer to the CHANGELOG.md in the main repository. Future plans and roadmap items are tracked via GitHub issues and milestones.
License
This project is licensed under the MIT License.
Acknowledgments
This library is part of the @asaidimu/erp-utils monorepository and integrates closely with @core/store/types (specifically, the DataStore interface) for reactive state management.
