@supercat1337/textfile-reader

v1.0.2

A utility for reading large text files line by line. It saves the last read line number to a file, so reading can resume from the same position after an error.

TextFile Reader

The TextFileReader is a high-performance utility for reading large text files line by line with state persistence. It saves the last read position in a settings file, allowing seamless resumption after interruptions, errors, or application restarts.

Features

  • 📖 Efficient large file reading - Handles files of any size using stream processing
  • 💾 State persistence - Automatically saves reading progress to resume later
  • ⏸️ Flow control - Pause and resume reading at any time
  • 🔄 Flexible starting points - Start from beginning, last position, or any specific line
  • 🛡️ Error recovery - Robust error handling with automatic state saving
  • 📊 Utility methods - Count lines, check status, and manage reading process

Installation

npm install @supercat1337/textfile-reader

Then import it as an ES module:

import { TextFileReader } from '@supercat1337/textfile-reader';

Quick Start

import { TextFileReader } from '@supercat1337/textfile-reader';

const textFileReader = new TextFileReader();

// Open a file
textFileReader.openFile("./large_file.txt");

// Read from the last saved position
await textFileReader.read((line, lineNumber) => {
    console.log(`${lineNumber}: ${line}`);
    
    // Example: Stop after reaching a specific condition
    if (lineNumber === 1000) {
        textFileReader.stop();
    }
});

Advanced Usage

Starting from Scratch

import { TextFileReader } from '@supercat1337/textfile-reader';

const reader = new TextFileReader();
reader.openFile("./data.txt");

// Reset to start from beginning
reader.resetSettings();

await reader.read((line, lineNumber) => {
    console.log(`Processing line ${lineNumber}`);
    // Your processing logic here
});

Pause and Resume Control

await reader.read(async (line, lineNumber) => {
    console.log(`Line ${lineNumber}: ${line}`);
    
    // Pause for external processing
    if (lineNumber % 100 === 0) {
        reader.pause();
        await someExternalProcessing();
        reader.resume();
    }
});

Get File Information

console.log(`Reading from: ${reader.getPath()}`);
console.log(`Current position: line ${reader.getCurrentLine()}`);
console.log(`Is reading: ${reader.isReading()}`);
console.log(`File opened: ${reader.isOpened()}`);

// Count total lines
const totalLines = await reader.countLines();
console.log(`Total lines in file: ${totalLines}`);

API Reference

Constructor

new TextFileReader(saveSettingsEveryLine = 10)

  • saveSettingsEveryLine - How often to save progress, in lines. Smaller values mean more frequent saves at a slight performance cost.
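
For example, to save progress less often when writing the settings file every 10 lines is too frequent:

import { TextFileReader } from '@supercat1337/textfile-reader';

// Save progress every 100 lines instead of the default 10
const reader = new TextFileReader(100);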

Core Methods

openFile(path)

Opens a file for reading. Throws an error if the file doesn't exist or another file is already open.
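
Since openFile throws, it's worth a try/catch when the path isn't guaranteed to exist; a minimal sketch:

import { TextFileReader } from '@supercat1337/textfile-reader';

const reader = new TextFileReader();

try {
    reader.openFile("./data.txt");
} catch (error) {
    // Thrown if ./data.txt is missing or another file is already open
    console.error("Could not open file:", error.message);
}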

read(callback)

Main reading method. Calls the provided callback for each line.

  • callback(line: string, lineNumber: number) - Can be async or sync
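
Because the callback may be async, the reader can await slow per-line work before moving to the next line; a sketch, where saveToDatabase stands in for your own async handler:

import { TextFileReader } from '@supercat1337/textfile-reader';

const reader = new TextFileReader();
reader.openFile("./data.txt");

await reader.read(async (line, lineNumber) => {
    // saveToDatabase is a hypothetical helper, not part of this package
    await saveToDatabase(lineNumber, line);
});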

countLines()

Counts the total number of lines in the file efficiently. Returns Promise<number>.

resetSettings()

Resets reading position to line 0.

stop()

Immediately stops the reading process.

Control Methods

pause()

Pauses the reading process. Can be resumed with resume().

resume()

Resumes reading after pausing.

close()

Closes the file and cleans up resources.

Status Methods

getPath()

Returns current file path.

getCurrentLine()

Returns current reading position.

isReading()

Returns true if reading is in progress.

isOpened()

Returns true if a file is currently opened.

Settings File

The reader automatically creates a .settings.json file in the same directory as your source file:

  • Filename: yourfile.settings.json (the source file's extension is replaced with .settings.json)
  • Format: {"line": 123}
  • Purpose: Stores the last read line number for resumption
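
Since the settings file is plain JSON, you can inspect or reset saved progress outside the reader; a sketch, assuming the naming convention above (data.txt → data.settings.json):

import { readFileSync } from 'node:fs';

// For ./data.txt the reader keeps its state in ./data.settings.json
const settings = JSON.parse(readFileSync('./data.settings.json', 'utf8'));
console.log(`Last saved position: line ${settings.line}`);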

Error Handling

The reader automatically handles errors and saves progress:

try {
    await reader.read((line, lineNumber) => {
        // Your processing logic
    });
} catch (error) {
    console.error("Reading failed:", error);
    // Progress is automatically saved - can resume later
}

Performance Tips

  • Use larger saveSettingsEveryLine values (50-100) for better performance with very large files
  • The default 64KB chunk size balances memory usage and performance
  • For massive files, consider pausing periodically to prevent memory buildup during intensive processing (a sketch follows this list)
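
One way to combine these tips, with processBatch standing in for your own heavy per-batch work:

import { TextFileReader } from '@supercat1337/textfile-reader';

const reader = new TextFileReader(100); // save progress every 100 lines
reader.openFile("./massive_log.txt");

const batch = [];
await reader.read(async (line, lineNumber) => {
    batch.push(line);
    if (lineNumber % 5000 === 0) {
        reader.pause();
        await processBatch(batch.splice(0)); // processBatch is hypothetical
        reader.resume();
    }
});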

Use Cases

  • Log file processing - Resume from where you left off
  • Data import pipelines - Handle interruptions gracefully
  • Batch processing - Process large datasets in chunks
  • Real-time monitoring - Follow growing files efficiently (sketched after this list)
  • Data analysis - Process large text files without loading into memory
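
As a sketch of the monitoring use case, the saved position makes a simple polling loop possible, assuming read() resolves once it reaches the current end of the file:

import { TextFileReader } from '@supercat1337/textfile-reader';

const reader = new TextFileReader();
reader.openFile("./app.log");

// Each pass reads only the lines appended since the previous pass
async function poll() {
    await reader.read((line, lineNumber) => {
        console.log(`new line ${lineNumber}: ${line}`);
    });
    setTimeout(poll, 5000); // poll again in 5 seconds
}

poll();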

Example: Complete Workflow

import { TextFileReader } from '@supercat1337/textfile-reader';

const reader = new TextFileReader(50); // Save every 50 lines

try {
    reader.openFile("./huge_dataset.csv");
    
    console.log(`Resuming from line ${reader.getCurrentLine()}`);
    
    await reader.read(async (line, lineNumber) => {
        // Process each line
        await processCSVLine(line);
        
        // Progress reporting
        if (lineNumber % 1000 === 0) {
            console.log(`Processed ${lineNumber} lines...`);
        }
    });
    
    console.log("File processing completed!");
} finally {
    reader.close();
}