@zhadev/anichin
Unofficial Anichin Scraper!
Features
- API Coverage - All Anichin pages are neatly wrapped.
- Multi-Platform - Works in Node.js, the browser, and TypeScript projects.
- Fast & Efficient - Built with Axios and Cheerio.
- Type Safe - Full TypeScript support.
- Customizable - Proxy support, rate limiting, retry mechanism, and more.
Installation
# npm
npm install @zhadev/anichin
# yarn
yarn add @zhadev/anichin
# pnpm
pnpm add @zhadev/anichin
Quick Start
Node.js / TypeScript
// ES Module
import AnichinScraper from '@zhadev/anichin';
// CommonJS
const AnichinScraper = require('@zhadev/anichin').default;
// Initialize
const scraper = new AnichinScraper();
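Once initialized, every method returns a Promise that resolves to the standardized response described under Response Format below. A minimal sketch, reusing the scraper instance created above and assuming the result.data.search.lists shape shown in the browser example below:
// Minimal usage sketch: search for a title and print the matches.
// Assumes the result.data.search.lists shape shown in the browser example.
async function quickStart() {
  const result = await scraper.search('renegade immortal');
  if (result.success) {
    console.log(result.data.search.lists); // list of matching series
  } else {
    console.error('Request failed:', result.message);
  }
}
quickStart();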
Browser (via CDN)
<script src="https://cdn.jsdelivr.net/npm/@zhadev/anichin/dist/javascript/browser.min.js"></script>
<script>
const scraper = new AnichinScraper();
scraper.search('renegade immortal').then(result => {
console.log(result.data.search.lists);
});
</script>
With Configuration
import AnichinScraper from '@zhadev/anichin';
const scraper = new AnichinScraper({
baseUrl: 'https://anichin.cafe',
userAgent: 'Custom/1.0',
timeout: 15000,
maxRetries: 3,
retryDelay: 1000,
requestDelay: 500,
proxy: {
host: 'proxy.example.com',
port: 8080,
protocol: 'http',
auth: {
username: 'user',
password: 'pass'
}
}
});
API Reference
- Constructor Options
interface ScraperConfig {
baseUrl?: string; // Base URL (default: 'https://anichin.cafe')
userAgent?: string; // Custom User-Agent
timeout?: number; // Request timeout in ms (default: 30000)
maxRetries?: number; // Max retry attempts (default: 3)
retryDelay?: number; // Delay between retries in ms (default: 1000)
requestDelay?: number; // Delay between requests in ms (default: 1000)
proxy?: { // Proxy configuration
host: string;
port: number;
protocol?: 'http' | 'https' | 'socks' | 'socks5';
auth?: {
username: string;
password: string;
};
};
}
- All Methods
// sidebar
const sidebar = await scraper.sidebar(); // returns quickfilter, popular series, ongoing series, etc.
// home
const home = await scraper.home(page?: number); // returns slider, latest release, popular today, recommendation, etc.
// search
const search = await scraper.search(query: string, page?: number); // returns query to search result.
// series detail
const detail = await scraper.series(slug: string); // returns full series info, episodes, download batch, etc.
// watching
const episode = await scraper.watch(slug: string, episodeNumber: number); // returns video servers, download links, episode navigation, etc.
// schedule
// all days
const schedule = await scraper.schedule();
// specific day
const monday = await scraper.schedule('monday'); // available: monday, tuesday, wednesday, thursday, friday, saturday, sunday
// lists
// ongoing series
const ongoing = await scraper.ongoing(page?: number);
// completed series
const completed = await scraper.completed(page?: number);
// a-z List
// all
const azlist = await scraper.azlist(page?: number);
// specific letter
const azlist = await scraper.azlist(page?: number, letter?: string); // example: await scraper.azlist(1, 'A')
// categories
// genres
const action = await scraper.genres(slug: string, page?: number); // example: scraper.genres('action', 1)
// seasons
const winter2025 = await scraper.season(slug: string); // example: scraper.season('winter-2025')
// studio
const studio = await scraper.studio(slug: string, page?: number); // example: scraper.studio('motion-magic', 1)
// networks
const network = await scraper.network(slug: string, page?: number); // example: scraper.network('iqiyi', 1)
// countries
const country = await scraper.country(slug: string, page?: number); // example: scraper.country('china', 1)
// advanced search (search donghua by filters)
// text mode (a-z listing)
const textMode = await scraper.advancedsearch('text');
// image mode
// without filters
const imageMode = await scraper.advancedsearch('image', page?: number);
// with filters
const filters = {
status: 'ongoing',
type: 'ona',
order: 'latest',
sub: 'sub',
genres: ['action', 'adventure'],
studios: ['studio-1', 'studio-2'],
seasons: ['fall-2025'],
per_page: 24
};
const imageMode = await scraper.advancedsearch('image', filters, page?: number);
// quick filter (available value for advanced search with filter)
const filters = await scraper.quickfilter(); // returns all available filter options for advanced search
Response Format
All methods return a standardized response:
interface ApiResponse {
success: boolean; // Whether the request was successful
creator: string; // Creator name ('zhadevv')
data: any; // The scraped data
message: string | null; // Error message if any
}
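Because scrape-level failures are reported through this envelope rather than as rejected Promises, a small helper can unwrap data or throw. A minimal sketch (the unwrap helper is illustrative, not part of the library):
// Illustrative helper (not part of the library): return the data field
// of an ApiResponse, or throw if the scrape reported a failure.
import { ApiResponse } from '@zhadev/anichin';

function unwrap<T = any>(response: ApiResponse): T {
  if (!response.success) {
    throw new Error(response.message ?? 'Anichin scrape failed');
  }
  return response.data as T;
}

// Usage (inside an async function):
// const detail = unwrap(await scraper.series('swallowed-star'));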
Response Example
- All response examples are available here.
Home:
{
"success": true,
"creator": "zhadevv",
"data": {
"home": {
"slider": [
{
"title": "Against the Sky Supreme",
"slug": "against-the-sky-supreme",
"thumbnail": "https://...",
"description": "Donghua description...",
"url": "https://anichin.cafe/seri/against-the-sky-supreme/"
}
],
"popular_today": [...],
"latest_release": [...],
"recommendation": {...}
}
},
"metadata": {},
"message": null
}
Watch:
{
"success": true,
"creator": "zhadevv",
"data": {
"watch": {
"title": "Against the Sky Supreme Episode 100",
"slug": "against-the-sky-supreme",
"episode_number": "100",
"servers": [
{
"server_id": "0",
"server_name": "Server 1",
"server_url": "embed_url_here"
}
],
"downloads": [
{
"title": "Download Episode 100",
"qualities": [
{
"quality": "480p",
"links": [
{
"name": "Google Drive",
"url": "download_url"
}
]
}
]
}
]
}
},
"metadata": {},
"message": null
}
Usage
Error Handling
try {
const result = await scraper.series('non-existent-slug');
if (!result.success) {
console.error('Error:', result.message);
}
} catch (error) {
console.error('Network error:', error);
}
Pagination
// Get page 2 of ongoing series
const ongoing = await scraper.ongoing(2);
// Navigate through search results
const search1 = await scraper.search('donghua', 1);
const search2 = await scraper.search('donghua', 2);
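To walk through several pages, loop until a page comes back empty. A minimal sketch, assuming each page's matches are exposed as result.data.search.lists (as in the browser example above) and that an empty list marks the last page:
// Pagination sketch: collect search results across several pages.
// Assumes result.data.search.lists holds each page's matches and that an
// empty list means there are no further pages.
async function searchAllPages(query: string, maxPages = 5) {
  const all: any[] = [];
  for (let page = 1; page <= maxPages; page++) {
    const result = await scraper.search(query, page);
    const lists = result.success ? result.data.search.lists : [];
    if (!lists || lists.length === 0) break; // no more results
    all.push(...lists);
  }
  return all;
}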
TypeScript Example
import AnichinScraper, { ApiResponse } from '@zhadev/anichin';
const scraper = new AnichinScraper();
async function getDonghuaInfo(slug: string): Promise<void> {
const response: ApiResponse = await scraper.series(slug);
if (response.success) {
const data = response.data.detail;
console.log(`Title: ${data.title}`);
console.log(`Episodes: ${data.episodes.length}`);
console.log(`Genres: ${data.genres.map(g => g.name).join(', ')}`);
}
}
getDonghuaInfo('swallowed-star');
Compatibility
Supported Platforms
- Node.js: 16.0.0 or higher
- Browsers: Chrome 80+, Firefox 75+, Safari 13.1+, Edge 80+
- Frameworks: React, Vue, Angular, Svelte, Next.js, Nuxt.js
Build Targets
{
"cjs": "CommonJS for Node.js",
"esm": "ES Modules for modern bundlers",
"types": "TypeScript definitions",
"browser": "UMD bundle for browsers"
}
Project Structure
@zhadev/anichin/
├── src/
│ └── Anichin.ts # Main library source
├── dist/
│ ├── cjs/ # CommonJS build
│ ├── esm/ # ES Module build
│ ├── types/ # TypeScript definitions
│ └── javascript/ # Browser bundles
├── examples/ # Usage examples
├── tests/ # Test suite
├── package.json
├── tsconfig.json
└── README.md
Important Notes
- Educational Purpose: This library is for educational purposes only
- Respect ToS: Always respect the website's Terms of Service
- Rate Limiting: Use proper delays (for example, the requestDelay option) to avoid overloading servers
- Caching: Cache responses to reduce repeated requests (see the sketch after this list)
- Legal Use: Use responsibly and comply with applicable laws
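The requestDelay, maxRetries, and retryDelay options above already throttle and retry requests; for caching, a small in-memory map in front of the scraper is usually enough. A minimal sketch (the cachedSeries helper and the ten-minute TTL are illustrative, not part of the library):
// Illustrative in-memory cache (not part of the library): reuse a series
// response for ten minutes instead of re-scraping it on every call.
const cache = new Map<string, { expires: number; value: any }>();
const TTL_MS = 10 * 60 * 1000;

async function cachedSeries(slug: string) {
  const hit = cache.get(slug);
  if (hit && hit.expires > Date.now()) {
    return hit.value; // served from cache, no network request
  }
  const response = await scraper.series(slug);
  if (response.success) {
    cache.set(slug, { expires: Date.now() + TTL_MS, value: response });
  }
  return response;
}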
Contributing
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
Development Setup
# Clone the repository
git clone https://github.com/zhadevv/anichin.git
cd anichin
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
# Watch mode for development
npm run dev
License
This project is licensed under the MIT License.
MIT License
Copyright (c) 2025 zhadevv
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Links
- NPM Package: @zhadev/anichin
- GitHub Repository: zhadevv/anichin
- Issue Tracker: GitHub Issues
- Change Log: Changelogs
Acknowledgements
- Anichin - For providing the content
- Axios - Promise based HTTP client
- Cheerio - Fast, flexible HTML parsing
- All contributors and users of this library
Disclaimer: This library is not affiliated with, maintained, authorized, endorsed or sponsored by Anichin or any of its affiliates. Use at your own risk.
