
@intentsolutionsio/coreweave-pack v1.0.0

Claude Code skill pack for CoreWeave: 24 skills covering GPU cloud infrastructure, ML workloads, and HPC

Downloads: 138

CoreWeave Skill Pack

24 production-grade Claude Code skills for GPU cloud computing with CoreWeave Kubernetes Service

What Is CoreWeave?

CoreWeave is a specialized GPU cloud platform built for AI/ML workloads. CoreWeave Kubernetes Service (CKS) runs Kubernetes directly on bare-metal GPU nodes, with no hypervisor and no VM layer. The platform provides:

  • Bare-metal GPU nodes with A100, H100, L40, and GH200 GPUs
  • KServe integration for serverless inference with scale-to-zero
  • Multi-node training with NVLink and InfiniBand interconnect
  • Shared storage (HDD, SSD, NVMe) via Kubernetes PVCs
  • DCGM metrics for GPU utilization monitoring out of the box

Access is via standard kubectl using a CoreWeave-issued kubeconfig. GPU scheduling uses node affinity with gpu.nvidia.com/class labels. Typical cost savings run 30-50% compared to hyperscaler GPU instances.
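To make the scheduling mechanism concrete, here is a minimal sketch of a one-off GPU pod pinned to an A100 node via the gpu.nvidia.com/class label mentioned above. The pod name and GPU class value are illustrative; check your cluster's node labels before relying on them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: a100-test            # hypothetical name
spec:
  restartPolicy: Never
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu.nvidia.com/class   # CoreWeave GPU class label
                operator: In
                values: ["A100"]
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: "1"   # request a single GPU
```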

This skill pack provides real kubectl commands, YAML manifests, and Python patterns for every stage of CoreWeave deployment.

Installation

/plugin install coreweave-pack@claude-code-plugins-plus

Skills Included

Getting Started (S01-S04)

| Skill | Description |
|-------|-------------|
| coreweave-install-auth | Kubeconfig setup, API token, GPU access verification |
| coreweave-hello-world | First GPU pod: vLLM inference server and batch CUDA job |
| coreweave-local-dev-loop | Container build, YAML validation, deploy-watch cycle |
| coreweave-sdk-patterns | GPU affinity helpers, inference client, deployment generators |
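The coreweave-sdk-patterns skill lists "GPU affinity helpers" and "deployment generators". A minimal sketch of what such a helper might look like (the function name and exact structure are illustrative, not taken from the pack; the gpu.nvidia.com/class label key is from the CoreWeave docs section above):

```python
def gpu_pod_spec(gpu_class: str, gpu_count: int = 1) -> dict:
    """Build nodeAffinity and resource-limit fragments for a CoreWeave GPU pod.

    Hypothetical helper: merge the returned dict into a pod spec / container spec.
    """
    return {
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [{
                        "matchExpressions": [{
                            "key": "gpu.nvidia.com/class",  # CoreWeave GPU class label
                            "operator": "In",
                            "values": [gpu_class],
                        }]
                    }]
                }
            }
        },
        # Kubernetes expects GPU counts as string-valued resource quantities
        "resources": {"limits": {"nvidia.com/gpu": str(gpu_count)}},
    }
```

Generating these fragments in code keeps GPU class and count as parameters, so the same template serves A100 dev pods and H100 production pods.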

Core Workflows (S05-S08)

| Skill | Description |
|-------|-------------|
| coreweave-core-workflow-a | KServe InferenceService with autoscaling and scale-to-zero |
| coreweave-core-workflow-b | Distributed GPU training with PyTorch DDP and shared storage |
| coreweave-common-errors | Pod Pending, CUDA OOM, NCCL timeout, image pull failures |
| coreweave-debug-bundle | Collect node status, GPU allocation, and pod logs for support |
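KServe's v1 protocol exposes models at /v1/models/{name}:predict and accepts a JSON body of the form {"instances": [...]}. A small sketch of an inference-client helper in that spirit (the function name, base URL, and model name are hypothetical; only the URL path and payload shape follow the KServe v1 protocol):

```python
import json


def build_predict_request(base_url: str, model: str, instances: list) -> tuple[str, str]:
    """Build the URL and JSON body for a KServe v1 predict call.

    Returns (url, body); POST the body with Content-Type: application/json.
    """
    url = f"{base_url}/v1/models/{model}:predict"   # KServe v1 predict path
    body = json.dumps({"instances": instances})     # v1 protocol payload shape
    return url, body
```

Keeping request construction separate from transport makes the helper easy to unit-test and to reuse with any HTTP client.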

Operations (S09-S12)

| Skill | Description |
|-------|-------------|
| coreweave-rate-limits | GPU quota management and inference request queuing |
| coreweave-security-basics | Secrets for model tokens, network policies, RBAC |
| coreweave-prod-checklist | Production readiness for inference and training workloads |
| coreweave-upgrade-migration | GPU type migration (A100 to H100), CUDA version upgrades |
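The coreweave-rate-limits skill covers inference request queuing. One common client-side pattern is to cap in-flight requests with a semaphore so a burst of traffic doesn't overwhelm a scaled-down inference service. A minimal asyncio sketch (function names are illustrative, not from the pack):

```python
import asyncio


async def bounded_infer(requests, worker, max_in_flight: int = 8):
    """Run inference calls with at most max_in_flight concurrent requests.

    requests: iterable of request payloads
    worker:   async callable that performs one inference call
    """
    sem = asyncio.Semaphore(max_in_flight)

    async def one(req):
        async with sem:           # blocks while max_in_flight calls are active
            return await worker(req)

    # Results come back in the same order as the input requests
    return await asyncio.gather(*(one(r) for r in requests))
```

In practice `worker` would POST to the inference endpoint; the semaphore bound gives the autoscaler time to add replicas instead of timing out queued requests.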

Pro Skills (P13-P18)

| Skill | Description |
|-------|-------------|
| coreweave-ci-integration | GitHub Actions for container build and CKS deployment |
| coreweave-deploy-integration | Helm charts and Kustomize overlays for GPU deployments |
| coreweave-webhooks-events | Kubernetes event monitoring, GPU metrics, Slack alerts |
| coreweave-performance-tuning | GPU selection, vLLM batching, HPA with DCGM metrics |
| coreweave-cost-tuning | GPU pricing comparison, scale-to-zero, quantization savings |
| coreweave-reference-architecture | Multi-model inference platform architecture |

Flagship Skills (F19-F24)

| Skill | Description |
|-------|-------------|
| coreweave-multi-env-setup | Dev/staging/prod with different GPU types and quotas |
| coreweave-observability | DCGM GPU metrics, Prometheus alerts, Grafana dashboards |
| coreweave-incident-runbook | GPU workload failure triage and remediation |
| coreweave-data-handling | PVC storage classes, model downloading, dataset management |
| coreweave-enterprise-rbac | Namespace isolation, GPU quotas per team, role bindings |
| coreweave-migration-deep-dive | Migrate from AWS/GCP GPU instances to CoreWeave CKS |

Quick Start

1. Install the Pack

/plugin install coreweave-pack@claude-code-plugins-plus

2. Configure kubectl

Download your kubeconfig from cloud.coreweave.com and set it up:

export KUBECONFIG=~/.kube/coreweave
kubectl get nodes

3. Deploy Your First GPU Workload

# Run nvidia-smi in a one-off pod that requests a single GPU
kubectl run gpu-test --image=nvidia/cuda:12.2.0-base-ubuntu22.04 \
  --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"gpu-test","image":"nvidia/cuda:12.2.0-base-ubuntu22.04","command":["nvidia-smi"],"resources":{"limits":{"nvidia.com/gpu":"1"}}}]}}'

kubectl logs gpu-test
kubectl delete pod gpu-test

4. Deploy an Inference Service

Follow coreweave-core-workflow-a to deploy a KServe InferenceService with autoscaling.
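For orientation before diving into that skill, a bare-bones InferenceService sketch with scale-to-zero might look like the following. The service name, image, and replica counts are illustrative assumptions; only the serving.kserve.io/v1beta1 API and the gpu.nvidia.com/class label come from KServe and the CoreWeave material above:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llama-inference            # hypothetical name
spec:
  predictor:
    minReplicas: 0                 # scale-to-zero when idle
    maxReplicas: 4
    nodeSelector:
      gpu.nvidia.com/class: A100   # pin to a GPU class
    containers:
      - name: kserve-container
        image: vllm/vllm-openai:latest   # hypothetical image tag
        resources:
          limits:
            nvidia.com/gpu: "1"
```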

License

MIT