# liquidcn - Reusable UI Components
A collection of reusable, accessible React UI components built with TypeScript, Tailwind CSS, and modern development tools.
## Summary

liquidcn is a comprehensive component library featuring:

- **UI Components**: Button, Card, Alert, Badge, Input, Textarea, Footer, PrettyAmount
- **Client Components**: Dialog, Select, Switch, Tabs, Sonner (Toast), PrettyDate, ResizableNavbar, Slider, AudioVisualizer
- **Form Components**: FormBuilder (schema-driven forms with AI mode), ChatView (AI chat interface with voice input)
- **Hooks**: Custom React hooks, including `useCookieWithFallback` and `useSpeechToText`
- **Server Utilities**: `createSpeechToTextHandler()` for voice-to-text API routes
- **Utilities**: `cn()` utility for className merging using clsx and tailwind-merge
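The `cn()` helper follows the widespread clsx + tailwind-merge pattern: merge arbitrary class inputs into one string. As a rough illustration of the joining half of that idea (not liquidcn's actual implementation, and without tailwind-merge's conflict resolution), a dependency-free sketch might look like:

```typescript
// Illustrative sketch only: liquidcn's real cn() delegates to clsx and
// tailwind-merge, which additionally resolves conflicting Tailwind classes
// (e.g. 'p-2' vs 'p-4'). This minimal version only joins truthy class values.
type ClassValue =
  | string
  | number
  | null
  | undefined
  | boolean
  | ClassValue[]
  | Record<string, boolean>

export function cnSketch(...inputs: ClassValue[]): string {
  const out: string[] = []
  for (const input of inputs) {
    if (!input) continue // skip null, undefined, false, '', 0
    if (typeof input === 'string' || typeof input === 'number') {
      out.push(String(input))
    } else if (Array.isArray(input)) {
      const nested = cnSketch(...input)
      if (nested) out.push(nested)
    } else if (typeof input === 'object') {
      // Conditional classes: { 'btn-primary': isPrimary }
      for (const [name, enabled] of Object.entries(input)) {
        if (enabled) out.push(name)
      }
    }
  }
  return out.join(' ')
}
```

Usage: `cnSketch('btn', { 'btn-primary': isPrimary }, className)` yields a single space-separated class string, which a component then passes to `className`.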
Bun + npm + TypeScript + Standard Version + Flat Config Linting + Husky + Commit / Release Pipeline
Check out the Changelog to see what changed in the last releases.
## Install

```shell
bun add liquidcn
```

Install Bun (Bun is the default package manager for this project; it is optional):

```shell
# Supported on macOS, Linux, and WSL
curl -fsSL https://bun.sh/install | bash

# Upgrade Bun every once in a while
bun upgrade
```

## Usage
### Import Styles

Add the liquidcn styles to your project root or layout file:

```ts
import 'liquidcn/styles.css'
```

### Navbar Component
The ResizableNavbar component provides a responsive navigation bar with desktop and mobile support:
```tsx
'use client'

import {
  Navbar as ResizableNavbar,
  NavBody,
  NavItems,
  MobileNav,
  NavbarLogo,
  NavbarButton,
  MobileNavHeader,
  MobileNavToggle,
  MobileNavMenu,
} from 'liquidcn/client'
import { useState } from 'react'
import Link from 'next/link'
import { usePathname } from 'next/navigation'

const navItems = [
  { name: 'Projects', link: '/' },
  { name: 'Deploy', link: '/deploy' },
  { name: 'Docs', link: '/docs' },
]

export function Navbar({ className }: { className?: string }) {
  const [isMobileMenuOpen, setIsMobileMenuOpen] = useState(false)
  const pathname = usePathname()

  return (
    <ResizableNavbar className={className} menuOpen={isMobileMenuOpen}>
      {/* Desktop Navigation */}
      <NavBody>
        <NavbarLogo imageSrc="/logo.png" />
        <NavItems items={navItems} currentPath={pathname} />
        <div className="flex items-center gap-4">
          <NavbarButton>Your Content Here</NavbarButton>
        </div>
      </NavBody>

      {/* Mobile Navigation */}
      <MobileNav>
        <MobileNavHeader>
          <NavbarLogo imageSrc="/logo.png" />
          <MobileNavToggle
            isOpen={isMobileMenuOpen}
            onClick={() => setIsMobileMenuOpen(!isMobileMenuOpen)}
          />
        </MobileNavHeader>
        <MobileNavMenu isOpen={isMobileMenuOpen}>
          {navItems.map((item, idx) => (
            <Link key={idx} href={item.link}>
              {item.name}
            </Link>
          ))}
        </MobileNavMenu>
      </MobileNav>
    </ResizableNavbar>
  )
}
```

### Footer Component
The Footer component displays links with icons and social media integration:
```tsx
'use client'

import Link from 'next/link'
import { FileText } from 'lucide-react'
import { SiFarcaster } from 'react-icons/si'
import { FaSquareXTwitter } from 'react-icons/fa6'
import { Footer, type FooterLink } from 'liquidcn'

const footerLinks: FooterLink[] = [
  {
    name: 'Twitter / X',
    href: 'https://x.com/yourhandle',
    icon: FaSquareXTwitter,
    showLabel: false,
  },
  {
    name: 'Farcaster',
    href: 'https://farcaster.xyz/yourhandle',
    icon: SiFarcaster,
    showLabel: false,
  },
  {
    name: 'Documentation',
    href: 'https://docs.example.com',
    icon: FileText,
    showLabel: true,
  },
]

export function MyFooter() {
  return (
    <Footer
      links={footerLinks}
      builtByBrand="Your Brand"
      linkComponent={Link}
    />
  )
}
```

### FormBuilder Component
The FormBuilder component renders schema-driven forms with optional AI-powered form filling. Works with useSchemaForm from tanstack-effect.
#### Basic Usage
```tsx
import { useSchemaForm } from 'tanstack-effect/client'
import { FormBuilder, FormValidationAlert, isFormValid } from 'liquidcn/client'
import { Button } from 'liquidcn'
import { Schema } from 'effect'

const ProjectSchema = Schema.Struct({
  projectName: Schema.String.pipe(Schema.annotations({ description: 'Name of the project' })),
  projectType: Schema.Literal('web', 'mobile', 'desktop').pipe(
    Schema.annotations({ description: 'Type of project' })
  ),
  teamSize: Schema.Number.pipe(Schema.annotations({ description: 'Number of team members' })),
})

function ProjectForm() {
  const form = useSchemaForm({
    schema: ProjectSchema,
  })

  return (
    <div>
      <FormBuilder form={form} variant="default" />
      <Button onClick={() => console.log(form.data)}>Submit</Button>
      <FormValidationAlert form={form} />
    </div>
  )
}
```

#### With AI Mode
Enable AI-powered form filling by adding the `ai` option to `useSchemaForm` and `enableAIMode` to `FormBuilder`:
```tsx
import { useSchemaForm } from 'tanstack-effect/client'
import { FormBuilder, FormValidationAlert } from 'liquidcn/client'
import { Button } from 'liquidcn'
import { Schema } from 'effect'

const ProjectSchema = Schema.Struct({
  projectName: Schema.String.pipe(Schema.annotations({ description: 'Name of the project' })),
  projectType: Schema.Literal('web', 'mobile', 'desktop').pipe(
    Schema.annotations({ description: 'Type of project' })
  ),
  teamSize: Schema.Number.pipe(Schema.annotations({ description: 'Number of team members' })),
})

function ProjectForm() {
  const form = useSchemaForm({
    schema: ProjectSchema,
    // Enable AI form filling
    ai: {
      endpoint: '/api/ai-form-fill',
    },
  })

  return (
    <div>
      <FormBuilder
        form={form}
        variant="wizard"
        enableAIMode
        aiPlaceholder="Describe your project..."
        aiChatMinHeight="300px"
      />
      <Button onClick={() => console.log(form.data)}>Submit</Button>
      <FormValidationAlert form={form} />
    </div>
  )
}
```

With AI mode enabled, the FormBuilder shows:
- AI/Edit toggle buttons - Switch between chat and manual editing
- Chat interface - Full conversation with AI including message history
- Clarification prompts - AI asks for missing required fields
- Summary - Shows what fields were filled (e.g. "Filled 3 fields: projectName, projectType, teamSize")
#### FormBuilder Props
| Prop | Type | Default | Description |
| ----------------- | ------------------------------------ | ----------- | ------------------------------- |
| form | UseSchemaFormReturn | required | Form state from useSchemaForm |
| variant | 'default' \| 'compact' \| 'wizard' | 'default' | Display variant |
| enableAIMode | boolean | false | Show AI/Edit mode toggle |
| initialMode | 'ai' \| 'edit' | 'ai' | Initial mode when AI is enabled |
| aiPlaceholder | string | - | Placeholder for AI chat input |
| aiChatMinHeight | string | '300px' | Minimum height for chat view |
| pinnedFields | string[] | [] | Fields to show at top level |
| hiddenFields | string[] | [] | Fields to hide from form |
### ChatView Component
The ChatView component provides a standalone chat UI for AI interactions with optional voice input:
```tsx
import { useState } from 'react'
import { ChatView } from 'liquidcn/client'

function AIChat() {
  const [messages, setMessages] = useState([])

  return (
    <ChatView
      messages={messages}
      status="idle"
      onSend={(msg) => console.log('Send:', msg)}
      placeholder="Ask me anything..."
      enableVoice={true} // Enable when OPENAI_API_KEY is configured on server
    />
  )
}
```

#### ChatView Props
| Prop | Type | Default | Description |
| --------------- | ---------- | --------------------- | ------------------------------------------------ |
| messages | array | required | Messages in the conversation |
| status | string | required | AI status: 'idle', 'filling', 'clarifying', etc. |
| onSend | function | required | Callback when user sends a message |
| placeholder | string | - | Placeholder text for input |
| enableVoice | boolean | false | Enable voice input (requires server API key) |
| voiceEndpoint | string | /api/speech-to-text | Custom endpoint for speech-to-text API |
### Speech-to-Text (Voice Input)
LiquidCN provides speech-to-text capabilities for voice input in chat interfaces.
#### Server Setup
First, install the optional AI SDK dependencies:
```shell
bun add ai @ai-sdk/openai
```

Create a speech-to-text API route using the handler:
```ts
// app/api/speech-to-text/route.ts
import { createSpeechToTextHandler } from 'liquidcn'
import { auth } from '@/auth'

const handler = createSpeechToTextHandler({
  authenticate: async () => {
    const session = await auth()
    return !!session?.user
  },
  defaultModel: 'gpt-4o-transcribe', // optional
})

export const POST = handler
```

Set the `OPENAI_API_KEY` environment variable on your server.
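On the client, the recorded audio is posted to this route as multipart form data (the built-in components and `useSpeechToText` handle this for you). A hypothetical helper sketching that request shape is shown below; the `audio` field name and default endpoint mirror this README's examples but are assumptions, not a documented liquidcn API:

```typescript
// Hypothetical helper sketching the request shape for the speech-to-text route.
// The 'audio' field name and the default endpoint path are assumptions based on
// this README (see transcribe({ audio }) and the voiceEndpoint default).
export function buildSpeechToTextRequest(
  audio: Blob,
  endpoint = '/api/speech-to-text'
): { endpoint: string; body: FormData } {
  const body = new FormData()
  // Attach the clip under a filename so the server receives a proper file upload.
  body.append('audio', audio, 'recording.webm')
  return { endpoint, body }
}

// Usage (browser):
//   const { endpoint, body } = buildSpeechToTextRequest(blob)
//   const res = await fetch(endpoint, { method: 'POST', body })
//   // Let fetch set the multipart Content-Type boundary automatically.
```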
#### Client Usage
The `useSpeechToText` hook can be used standalone:
```tsx
import { useSpeechToText } from 'liquidcn/client'

function VoiceInput() {
  const { transcribe, isLoading, data, error } = useSpeechToText({
    onSuccess: (result) => console.log('Transcribed:', result.text),
    onError: (err) => console.error('Error:', err.error),
  })

  const handleRecord = async (audioFile: File) => {
    await transcribe({ audio: audioFile })
  }

  return (
    <div>
      {isLoading && <p>Transcribing...</p>}
      {data && <p>Result: {data.text}</p>}
    </div>
  )
}
```

### AudioVisualizer Component
Display audio waveform during recording:
```tsx
import { AudioVisualizer } from 'liquidcn/client'

<AudioVisualizer
  isRecording={isRecording}
  mediaRecorder={mediaRecorder}
  width={280}
  height={60}
  barColor="#a78bfa"
/>
```

## Developing
Install dependencies:

```shell
bun i
```

Watch TS problems:

```shell
bun watch
```

Format / lint / type-check:

```shell
bun format
bun lint
bun type-check
```

## How to make a release
For the Maintainer: Add NPM_TOKEN to the GitHub Secrets.
- PR with changes
- Merge PR into main
- Checkout main and run `git pull`
- Run `bun release` ('' | alpha | beta); optionally add `-- --release-as minor | major | 0.0.1`
- Make sure everything looks good (e.g. in CHANGELOG.md)
- Lastly run `bun release:pub`
- Done
## License
This package is licensed - see the LICENSE file for details.
