
Serper for JavaScript/TypeScript


Scrape Google search results using Serper, the fastest and most affordable SERP API.

Quick Start

Node.js

  • Supports most modern Node.js versions.

# npm
npm install serper
# yarn
yarn add serper
# pnpm
pnpm add serper

import { Serper } from "serper"; // ES Modules
const { Serper } = require("serper"); // CommonJS Modules

const serper = new Serper({
  apiKey: process.env.SERPER_API_KEY // Get your API key at https://serper.dev/api-key
});

const results = await serper.search("search terms");

Features

  • TypeScript support.
  • Works in Node.js and the browser.
  • Promises and async/await support.

Configuration

Configuration is simple; there are just three options.

  • The required API key.
  • An optional request timeout.
  • A toggle for the response cache.

const serper = new Serper({
  apiKey: process.env.SERPER_API_KEY, // Your API key; required
  timeout: 10000, // Request timeout in milliseconds; 10000 by default
  doCache: true // Cache responses locally; true by default
});

Basic Usage

Using the client is just as simple as the quick start: initialize a client, then make requests with its async methods.

Currently, you can search under all supported Serper API routes. They are:

  • Search - Typical rich search page.
  • News - Current Google news articles only.
  • Images - Just images and links.
  • Videos - Just videos and links.
  • Places - Google Maps locations and information.

All of these routes are called through the same client API, with the method name matching the route name. Replace search in the quick start example with any of them to run that type of search.
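
For instance, here is a minimal sketch of the other routes, assuming the method names are simply the lowercased route names listed above:

// Each route is a method on the same client (a sketch; the lowercased
// method names are assumed from the route list above).
const news = await serper.news("latest AI research");
const images = await serper.images("golden retriever puppies");
const videos = await serper.videos("how to tie a bowline knot");
const places = await serper.places("coffee shops in Seattle");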

Pagination

Pagination is built into Serper with three simple functions: nextPage, prevPage, and toPage. Due to API limitations, nextPage currently gives no indication that the last page has been reached. prevPage simply returns the first page if you try to go before it, and toPage jumps to the specified page.

Every response exposes nextPage and prevPage, including the responses returned by pagination calls themselves.

// Start with an initial search, then advance five pages.
let results = await serper.search("dog shelters");
for (let x = 0; x < 5; x++) {
  results = await results.nextPage();
}
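
The remaining helpers follow the same pattern. A quick sketch, assuming toPage accepts a page number (not confirmed by the docs above):

// Jump straight to a later page, then step back one page.
let page = await serper.search("dog shelters");
page = await page.toPage(4);
page = await page.prevPage();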

Caching

All responses from the Serper client are cached locally to reduce credit usage during pagination and repeated searches. The cache can be disabled by setting doCache to false in the configuration, although this is not recommended: it will significantly slow down your application and should only be done when up-to-the-minute information (e.g. news) is needed.
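
If you do need fresh results on every request, the doCache toggle from the configuration section can be turned off. A minimal sketch:

// Disable the local cache so every call hits the API (uses credits on each request).
const freshSerper = new Serper({
  apiKey: process.env.SERPER_API_KEY,
  doCache: false
});

const headlines = await freshSerper.news("breaking news");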

Documentation and Examples

API docs are available online here, and a large collection of usage examples is available in the examples directory.

Note: All examples assume top-level await, which is supported in modern Node.js and most modern browsers.

To Do

  • [x] Implement pagination.
  • [ ] A more robust Response object with caching.
  • [ ] Better request prechecking.
  • [ ] External (ie. Redis) caching.
  • [ ] Unit tests.
  • [x] Documentation.
  • [ ] Deno support.
  • [ ] Module bundler.
  • [ ] Any other ideas you may have.

License

This code is released under the permissive MIT license. Anyone can contribute to, use, redistribute, and sell this library without giving credit, although credit is appreciated.