A cache that utilizes memory, disk, and S3 for data storage and backup.
To install `@push.rocks/levelcache`, you can use npm or yarn:

```bash
npm install @push.rocks/levelcache --save
```

or

```bash
yarn add @push.rocks/levelcache
```

This installs `@push.rocks/levelcache` and adds it to your project's dependencies.
`@push.rocks/levelcache` provides a comprehensive solution for multi-level caching that takes advantage of memory, disk, and Amazon S3 storage, making it a versatile tool for data caching and backup. The package is built with TypeScript, enabling strict type checks and a better development experience. Below, we'll explore how to effectively employ `@push.rocks/levelcache` in your projects, discussing its features and demonstrating its usage with code examples.
The `LevelCache` class handles all cache operations. It decides where to store data based on pre-configured thresholds corresponding to the data size and the total storage capacity allocated to each storage level (memory/disk/S3). This mechanism optimizes both speed and persistence, allowing for efficient data storage and retrieval.
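To make the routing idea concrete, here is a minimal, purely illustrative sketch of a size-threshold decision. It is not the library's internal code; the `pickTier` function, tier names, and thresholds are assumptions made for illustration only.

```typescript
// Conceptual sketch only -- NOT the library's actual implementation.
// Illustrates routing a payload to a storage tier based on size thresholds.
type StorageTier = 'memory' | 'disk' | 's3';

function pickTier(
  payloadBytes: number,
  maxMemoryBytes: number,
  maxDiskBytes: number,
): StorageTier {
  if (payloadBytes <= maxMemoryBytes) return 'memory'; // small: keep in RAM for speed
  if (payloadBytes <= maxDiskBytes) return 'disk'; // medium: spill to local disk
  return 's3'; // large: push to object storage for durability
}

// Example: a 2 MB payload with a 1 MB memory budget and a 10 MB disk budget lands on disk.
console.log(pickTier(2 * 1024 * 1024, 1 * 1024 * 1024, 10 * 1024 * 1024)); // "disk"
```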
To use `@push.rocks/levelcache`, you'll need to import the main classes: `LevelCache` and `CacheEntry`. `LevelCache` is the primary class, while `CacheEntry` represents individual pieces of cached data.

```typescript
import { LevelCache, CacheEntry } from '@push.rocks/levelcache';
```
To create a cache, instantiate the `LevelCache` class with the desired configuration. You can specify limits for memory and disk storage, set up S3 if needed, and more.
```typescript
const myCache = new LevelCache({
  cacheId: 'myUniqueCacheId', // Unique ID for cache delineation
  maxMemoryStorageInMB: 10, // Maximum memory use in MB (default 0.5 MB)
  maxDiskStorageInMB: 100, // Maximum disk space in MB (default 10 MB)
  diskStoragePath: './myCache', // Path for storing disk cache; default is '.nogit'
  s3Config: {
    accessKeyId: 'yourAccessKeyId', // AWS S3 access key
    secretAccessKey: 'yourSecretAccessKey', // Corresponding secret key
    region: 'us-west-2', // AWS region, e.g., 'us-west-2'
  },
  s3BucketName: 'myBucketName', // Designated name for S3 bucket
  immutableCache: false, // Whether stored cache entries should remain unaltered
  persistentCache: true, // Whether the cache should persist across restarts
});
```
`LevelCache` methods enable seamless data storage and retrieval, handling the complexity under the hood.

Create a `CacheEntry` specifying the data contents and a time-to-live (`ttl`). Use `storeCacheEntryByKey` to add the entry to the cache.
```typescript
async function storeData() {
  // Wait for the cache to be ready before performing operations
  await myCache.ready;

  const entryContents = Buffer.from('Caching this data');
  const myCacheEntry = new CacheEntry({
    ttl: 7200000, // Time-to-live in milliseconds (2 hours)
    contents: entryContents,
  });

  // Store the cache entry under a specific key
  await myCache.storeCacheEntryByKey('someDataKey', myCacheEntry);
}
```
Retrieve stored data using `retrieveCacheEntryByKey`. The returned `CacheEntry` gives access to the original data.
```typescript
async function retrieveData() {
  const retrievedEntry = await myCache.retrieveCacheEntryByKey('someDataKey');
  if (retrievedEntry) {
    const data = retrievedEntry.contents.toString();
    console.log(data); // Expected output: Caching this data
  } else {
    console.log('Data not found or expired.');
  }
}
```
Remove entries with `deleteCacheEntryByKey`, enabling clean cache management.
```typescript
async function deleteData() {
  // Remove an entry using its unique key identifier
  await myCache.deleteCacheEntryByKey('someDataKey');
}
```
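A common way to combine these calls is a cache-aside helper: return the cached value when present, otherwise compute it, store it, and return it. The sketch below uses only the methods shown above; the `getOrCompute` helper and its parameters are hypothetical and not part of the library's API.

```typescript
// Hypothetical cache-aside helper built on the documented methods above;
// getOrCompute is not provided by @push.rocks/levelcache itself.
async function getOrCompute(
  key: string,
  compute: () => Promise<Buffer>,
  ttlMs: number,
): Promise<Buffer> {
  await myCache.ready;

  const cached = await myCache.retrieveCacheEntryByKey(key);
  if (cached) {
    return cached.contents; // Cache hit: reuse the stored Buffer
  }

  // Cache miss: compute the value, store it, then return it
  const contents = await compute();
  await myCache.storeCacheEntryByKey(key, new CacheEntry({ ttl: ttlMs, contents }));
  return contents;
}

// Usage: cache an expensive computation for one hour
const report = await getOrCompute(
  'quarterlyReport',
  async () => Buffer.from('expensive result'),
  60 * 60 * 1000,
);
```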
Often, managing storage limits or removing outdated data becomes essential. The library supports these scenarios.
While cache entries naturally expire according to their `ttl` values, you can also force-remove outdated entries.

```typescript
// Clean outdated or expired entries
await myCache.cleanOutdated();
```
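If you prefer not to rely solely on `ttl` expiry, one possible pattern is to run `cleanOutdated` on a schedule. The sweep interval below is an arbitrary choice for illustration; nothing here is required by the library.

```typescript
// Optional pattern: periodically sweep expired entries.
// The 10-minute interval is an illustrative assumption, not a library default.
const sweepIntervalMs = 10 * 60 * 1000;

const sweeper = setInterval(() => {
  myCache.cleanOutdated().catch((err) => {
    console.error('Cache sweep failed:', err);
  });
}, sweepIntervalMs);

// Call clearInterval(sweeper) during shutdown to stop the sweeps.
```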
Clear all entries, efficiently resetting your cache storage.
```typescript
// Flush the entire cache content
await myCache.cleanAll();
```
The flexible nature of `@push.rocks/levelcache` allows additional customization suited to more advanced requirements.

For certain demands, you might want to specify distinct data-handling policies or routing logic (one such pattern is sketched after the list below).

- Adjust S3 handling, size thresholds, or immutability options dynamically.
- Utilize the internal API expansions defined within the library for fine-grained operations.
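As an example of a data-handling policy, you could centralize different retention rules per data category in a small helper. The categories, TTL values, and `storeWithPolicy` function below are assumptions for illustration; only the documented `CacheEntry` and `storeCacheEntryByKey` calls come from the library.

```typescript
// Illustrative policy helper: the categories, TTL values, and storeWithPolicy
// function are hypothetical and not part of @push.rocks/levelcache.
const ttlPolicies = {
  session: 15 * 60 * 1000, // 15 minutes
  report: 2 * 60 * 60 * 1000, // 2 hours
  archive: 7 * 24 * 60 * 60 * 1000, // 7 days
} as const;

async function storeWithPolicy(
  category: keyof typeof ttlPolicies,
  key: string,
  contents: Buffer,
): Promise<void> {
  await myCache.ready;
  await myCache.storeCacheEntryByKey(
    `${category}:${key}`,
    new CacheEntry({ ttl: ttlPolicies[category], contents }),
  );
}

// Usage: short-lived session data versus a longer-lived report
await storeWithPolicy('session', 'user-42', Buffer.from('{"loggedIn":true}'));
await storeWithPolicy('report', '2024-q1', Buffer.from('report bytes'));
```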
Tailor the cache levels (memory, disk, S3) to accommodate higher loads:
```typescript
const largeDatasetCache = new LevelCache({
  cacheId: 'largeDatasetCache',
  // Customize limits and behavior for particular workload patterns
  maxMemoryStorageInMB: 1024, // 1 GB memory allocation
  maxDiskStorageInMB: 2048, // 2 GB disk space allowance
  maxS3StorageInMB: 10240, // 10 GB S3 backup buffering
});
```
With intelligent routing and management built in, `LevelCache` ensures an optimal trade-off between speed and stability.
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the license file within this repository.
Please note: The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.