# @proj-airi/lobe-icons
v1.0.14
Iconify JSON IconSet port for @lobehub/icons
Popular AI / LLM Model Brand SVG Logo and Icon Collection. See them all on one page at lobehub.com/icons. Contributions, corrections & requests can be made on their GitHub repository.
This enables you to use Lobe Icons in UnoCSS or in any other Iconify-compatible scenario.
> [!NOTE]
> This project is part of (and associated with) Project AIRI, where we aim to build an LLM-driven VTuber like Neuro-sama (subscribe if you haven't!). If you are interested, please give the live demo a try.
## Installation
Pick the package manager of your choice:

```shell
ni @proj-airi/lobe-icons -D # from @antfu/ni, can be installed via `npm i -g @antfu/ni`
pnpm i -D @proj-airi/lobe-icons @iconify/utils
yarn add -D @proj-airi/lobe-icons @iconify/utils
npm i -D @proj-airi/lobe-icons @iconify/utils
```

## UnoCSS usage
```ts
// uno.config.ts
import { createExternalPackageIconLoader } from '@iconify/utils/lib/loader/external-pkg'
import { defineConfig, presetIcons } from 'unocss'

export default defineConfig({
  presets: [
    // Other presets
    presetIcons({
      scale: 1.2,
      collections: {
        ...createExternalPackageIconLoader('@proj-airi/lobe-icons'),
      },
    }),
  ],
})
```

## Other side projects born from Project AIRI
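With the loader registered, icons from the set can be referenced through UnoCSS's `i-` utility classes. A minimal sketch, assuming the collection's prefix is `lobe-icons` and that `openai` is one of its icon names (both are illustrative — check the set's actual prefix and icon list):

```html
<!-- Hypothetical icon names; verify against the actual icon set -->
<div class="i-lobe-icons:openai text-2xl" />
<span class="i-lobe-icons:claude" />
```

UnoCSS resolves these classes at build time, so only the icons you actually reference end up in the generated CSS.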
- Awesome AI VTuber: A curated list of AI VTubers and related projects
- unspeech: Universal endpoint proxy server for `/audio/transcriptions` and `/audio/speech`, like LiteLLM but for any ASR and TTS
- hfup: tools to help on deploying, bundling to HuggingFace Spaces
- xsai-transformers: Experimental 🤗 Transformers.js provider for xsAI
- WebAI: Realtime Voice Chat: Full example of implementing ChatGPT's realtime voice from scratch with VAD + STT + LLM + TTS
- @proj-airi/drizzle-duckdb-wasm: Drizzle ORM driver for DuckDB WASM
- @proj-airi/duckdb-wasm: Easy-to-use wrapper for `@duckdb/duckdb-wasm`
- Airi Factorio: Allow Airi to play Factorio
- Factorio RCON API: RESTful API wrapper for Factorio headless server console
- autorio: Factorio automation library
- tstl-plugin-reload-factorio-mod: Reload Factorio mod when developing
- Velin: Use Vue SFC and Markdown to write easy-to-manage stateful prompts for LLM
- demodel: Easily boost the speed of pulling your models and datasets from various inference runtimes
- inventory: Centralized model catalog and default provider configurations backend service
- MCP Launcher: Easy-to-use MCP builder & launcher for all possible MCP servers, just like Ollama for models!
- 🥺 SAD: Documentation and notes for self-hosted and browser-running LLMs
