TextFile Reader
The TextFileReader is a high-performance utility for reading large text files line by line with state persistence. It saves the last read position in a settings file, allowing seamless resumption after interruptions, errors, or application restarts.
Features
- 📖 Efficient large file reading - Handles files of any size using stream processing
- 💾 State persistence - Automatically saves reading progress to resume later
- ⏸️ Flow control - Pause and resume reading at any time
- 🔄 Flexible starting points - Start from beginning, last position, or any specific line
- 🛡️ Error recovery - Robust error handling with automatic state saving
- 📊 Utility methods - Count lines, check status, and manage reading process
Installation
npm install @supercat1337/textfile-reader
// ES Modules
import { TextFileReader } from '@supercat1337/textfile-reader';
Quick Start
import { TextFileReader } from '@supercat1337/textfile-reader';
const textFileReader = new TextFileReader();
// Open a file
textFileReader.openFile("./large_file.txt");
// Read from the last saved position
await textFileReader.read((line, lineNumber) => {
    console.log(`${lineNumber}: ${line}`);
    // Example: Stop after reaching a specific condition
    if (lineNumber === 1000) {
        textFileReader.stop();
    }
});
Advanced Usage
Starting from Scratch
import { TextFileReader } from '@supercat1337/textfile-reader';
const reader = new TextFileReader();
reader.openFile("./data.txt");
// Reset to start from beginning
reader.resetSettings();
await reader.read((line, lineNumber) => {
    console.log(`Processing line ${lineNumber}`);
    // Your processing logic here
});
Pause and Resume Control
await reader.read(async (line, lineNumber) => {
    console.log(`Line ${lineNumber}: ${line}`);
    // Pause for external processing
    if (lineNumber % 100 === 0) {
        reader.pause();
        await someExternalProcessing();
        reader.resume();
    }
});
Get File Information
console.log(`Reading from: ${reader.getPath()}`);
console.log(`Current position: line ${reader.getCurrentLine()}`);
console.log(`Is reading: ${reader.isReading()}`);
console.log(`File opened: ${reader.isOpened()}`);
// Count total lines
const totalLines = await reader.countLines();
console.log(`Total lines in file: ${totalLines}`);
API Reference
Constructor
new TextFileReader(saveSettingsEveryLine = 10)
- saveSettingsEveryLine - How often to save progress (in lines). Smaller values = more frequent saves but slightly slower performance.
Core Methods
openFile(path)
Opens a file for reading. Throws an error if the file doesn't exist or if another file is already open.
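For example, the call can be wrapped in a try/catch to handle a missing file (a minimal sketch; the path is illustrative):
import { TextFileReader } from '@supercat1337/textfile-reader';
const reader = new TextFileReader();
try {
    reader.openFile("./missing_file.txt");
} catch (error) {
    // Thrown when the file doesn't exist or another file is already open
    console.error("Could not open file:", error.message);
}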
read(callback)
Main reading method. Calls the provided callback for each line.
callback(line: string, lineNumber: number) - can be async or sync
countLines()
Returns Promise<number> - Counts total lines in the file efficiently.
resetSettings()
Resets reading position to line 0.
stop()
Immediately stops the reading process.
Control Methods
pause()
Pauses the reading process. Can be resumed with resume().
resume()
Resumes reading after pausing.
close()
Closes the file and cleans up resources.
Status Methods
getPath()
Returns current file path.
getCurrentLine()
Returns current reading position.
isReading()
Returns true if reading is in progress.
isOpened()
Returns true if a file is currently opened.
Settings File
The reader automatically creates a .settings.json file in the same directory as your source file:
- Filename: yourfile.settings.json (replaces extension)
- Format: {"line": 123}
- Purpose: Stores the last read line number for resumption
Error Handling
The reader automatically handles errors and saves progress:
try {
    await reader.read((line, lineNumber) => {
        // Your processing logic
    });
} catch (error) {
    console.error("Reading failed:", error);
    // Progress is automatically saved - can resume later
}
Performance Tips
- Use larger saveSettingsEveryLine values (50-100) for better performance with very large files
- The default 64KB chunk size balances memory usage and performance
- For massive files, consider pausing periodically to prevent memory buildup during intensive processing
Use Cases
- Log file processing - Resume from where you left off
- Data import pipelines - Handle interruptions gracefully
- Batch processing - Process large datasets in chunks
- Real-time monitoring - Follow growing files efficiently (see the sketch after this list)
- Data analysis - Process large text files without loading into memory
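For the real-time monitoring case, one possible approach is to re-run read() on a timer so that each pass processes only newly appended lines. This is only a sketch: it assumes read() resolves once it reaches the current end of the file and that the saved position stays up to date between passes.
import { TextFileReader } from '@supercat1337/textfile-reader';

const reader = new TextFileReader();
reader.openFile("./app.log");

// Hypothetical polling loop: each pass resumes from the last saved position,
// so it should only see lines appended since the previous pass (assumption)
setInterval(async () => {
    if (reader.isReading()) return; // previous pass still running
    await reader.read((line, lineNumber) => {
        console.log(`New log line ${lineNumber}: ${line}`);
    });
}, 5000);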
Example: Complete Workflow
import { TextFileReader } from '@supercat1337/textfile-reader';
const reader = new TextFileReader(50); // Save every 50 lines
try {
    reader.openFile("./huge_dataset.csv");
    console.log(`Resuming from line ${reader.getCurrentLine()}`);
    await reader.read(async (line, lineNumber) => {
        // Process each line
        await processCSVLine(line);
        // Progress reporting
        if (lineNumber % 1000 === 0) {
            console.log(`Processed ${lineNumber} lines...`);
        }
    });
    console.log("File processing completed!");
} finally {
    reader.close();
}