Tools, guides, and knowledge infrastructure for AI-assisted development: knowledge graphs, creative studio, Claude Code skills, RAG systems, and more.
The AI Hub is a collection of tools, documentation, and infrastructure for building serious AI-assisted development workflows. It covers the full stack: from how to instruct your coding agent effectively (CLAUDE.md), to building knowledge graphs that let agents navigate million-line codebases with a 115x token reduction, to a local-first creative studio for generating images, audio, and video.
Everything here is oriented around practical, local-first tooling. The knowledge graphs are real open-source benchmarks. The skills run in Claude Code today. Wyltek Studio runs on your own GPU.
graphify: entity extraction, relationship mapping, community detection. Real benchmark: 47k nodes from 36 CKB repos, 115x token reduction vs naive RAG.
Claude Code skills: /graphify, /cut-subject, /graph-routing, /quote-meme, and more. Package specialist knowledge as portable, shareable skills.
Image generation: 15+ models including SDXL, Flux, SD 1.5, Gemini image models, and more. Side-by-side compare mode for A/B model testing. LoRA support. Automatic CLIP aesthetic scoring. Prompt history with favourites.
Load any video file, scrub frame-by-frame with a timeline slider, and grab individual frames as PNG. Send directly to Image Tools for background removal or masking. Great for dataset creation and reference frames.
Background removal via rembg (5 models: u2net, isnet, u2net_human_seg, silueta, SAM). Mask tools: brush, rectangle, ellipse, lasso, and SAM click-to-segment for AI-assisted object selection. Object removal coming soon.
Four TTS engines: Piper (fast, offline), Kokoro (natural prosody), XTTS v2 (voice cloning: provide 6 seconds of audio to clone any voice), and Bark (expressive, multilingual). Export WAV or MP3.
MusicGen (Meta) for text-to-music generation. Continuation mode builds on an existing audio clip. Loop mode generates seamlessly tiling background tracks. Output as WAV.
AnimateDiff via ComfyUI for image-to-video animation. Projects module: timeline compositor for assembling image sequences, generated audio, TTS narration, and music into final video exports via FFmpeg.
graphify
Build queryable knowledge graphs from codebases, documentation, or any text corpus. Produces an interactive HTML viewer, GraphRAG-ready JSON, and a GRAPH_REPORT.md, all from a single command.
pip install graphifyy # two y's: PyPI namespace
graphify . # build from current directory
graphify . --mode deep # thorough extraction
graphify --mcp # start MCP server mode
graph.json: GraphRAG-ready output with nodes, edges, communities, and metadata. Query it directly or via MCP.
graph.html: self-contained interactive viewer. Force-directed layout, search, filter by community or type, click to inspect.
GRAPH_REPORT.md: top communities by size, key entities, notable relationships. Use as a CLAUDE.md appendix.
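Once built, graph.json can be queried with a few lines of plain Python. A minimal sketch; the exact field names ("nodes", "edges", "source", "target", "type") are assumptions about the schema, not documented output:

```python
import json

# Toy stand-in for the real file; in practice: graph = json.load(open("graph.json"))
# Field names here are assumed, not taken from graphify's documented schema.
graph = {
    "nodes": [
        {"id": "parse_tx", "type": "function"},
        {"id": "verify_script", "type": "function"},
    ],
    "edges": [
        {"source": "parse_tx", "target": "verify_script", "type": "calls"},
    ],
}

def callers_of(graph: dict, target: str) -> list[str]:
    """Return node ids that have a 'calls' edge into `target`."""
    return [e["source"] for e in graph["edges"]
            if e["type"] == "calls" and e["target"] == target]

print(callers_of(graph, "verify_script"))  # ['parse_tx']
```

The same edge scan works for imports, implements, or any other relationship type the graph records.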
Any input to a knowledge graph: entity extraction, relationship mapping, community detection, interactive viewer + GraphRAG JSON.
Background removal via rembg. Auto-selects model by subject type (portrait, product, object). Batch capable.
Six-step pipeline: intercepts cross-repo and architecture questions, routes to the right graph registry entry, returns relevant subgraph before any grep/glob.
AI quote meme generator: takes a quote (or generates one), selects a background, styles the typography, and outputs a web-shareable image via ComfyUI.
Captures embedded hardware test sessions as structured, graph-ingestible markdown. Auto-invokes after bring-up or debug sessions that conclude with a fix.
Generates a non-destructive check-deps.sh audit script for any project being prepared for public release. Reads package files, fills in tool requirements.
CKB-VM cycle benchmark tool. Reads ckb-bench.toml or auto-discovers contracts. Builds, measures via ckb-debugger, emits timestamped markdown reports with cycle counts, binary sizes, and comparison tables. Also available as a standalone Claude Code plugin.
Cycle-accurate benchmarks of SNARK-friendly and conventional hash functions on the CKB Virtual Machine (RISC-V rv64imc). The first published Poseidon2 cycle counts on CKB-VM, answering "can CKB run a zkVM?" with real numbers.
Key finding: Poseidon2 over Goldilocks costs just 30K cycles per permutation, or 0.04% of the 70M script limit. A zkVM verifier using Plonky3 is feasible on CKB.
Goldilocks Poseidon2: 30,170 cycles (64-bit field, Plonky3/RISC Zero)
BN254 Poseidon2: 5,961,994 cycles (256-bit, Groth16/PLONK)
BLS12-381 Poseidon2: 6,009,859 cycles (256-bit, Halo2/Zcash)
Blake2b-256: 14,094 cycles (CKB native ckbhash)
Keccak-256: 57,112 cycles (Ethereum compat)
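A quick sanity check on how these counts relate to the 70M-cycle script limit. The cycle figures are from the table above; the per-script budget arithmetic is just division:

```python
# Published cycle counts vs CKB's 70M-cycle script limit.
SCRIPT_LIMIT = 70_000_000

cycles = {
    "Goldilocks Poseidon2": 30_170,
    "BN254 Poseidon2": 5_961_994,
    "BLS12-381 Poseidon2": 6_009_859,
    "Blake2b-256": 14_094,
    "Keccak-256": 57_112,
}

for name, c in cycles.items():
    # Fraction of the limit one permutation/hash consumes, and how many
    # fit inside a single script's budget.
    print(f"{name}: {c / SCRIPT_LIMIT:.4%} of limit, "
          f"~{SCRIPT_LIMIT // c} per script")
```

Goldilocks Poseidon2 comes out at roughly 0.04% of the limit, matching the key finding above, with room for over two thousand permutations in one script.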
/ingest with source, text, and title: findings are embedded with FAISS and immediately queryable. Any agent or script can ingest research findings.

Fine-tuned on CKB ecosystem documentation: the protocol, CKB-VM, Nervos SDK, transaction format, scripts, and related tooling. The model downloads once to your browser's cache and runs locally: no API keys, no rate limits, no server.
Technical guides for building effective AI agent workflows, from writing your first CLAUDE.md to architecting multi-graph knowledge routing systems.
Naive RAG retrieves text chunks by semantic similarity. Knowledge graphs model the actual relationships between entities (calls, imports, dependencies, implementations) and return the minimal relevant subgraph for any question. The result is dramatically fewer tokens with better cross-file accuracy.
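The "minimal relevant subgraph" idea can be illustrated with networkx (a stand-in here; graphify's internals aren't specified, and the file names are invented):

```python
import networkx as nx

# Toy import graph; node names and edges are illustrative only.
G = nx.DiGraph()
G.add_edge("cli.py", "parser.py", rel="imports")
G.add_edge("parser.py", "ast.py", rel="imports")
G.add_edge("render.py", "templates.py", rel="imports")

# For a question about parser.py, return only its 1-hop neighbourhood
# instead of retrieving chunks from the whole corpus.
sub = nx.ego_graph(G, "parser.py", radius=1, undirected=True)
print(sorted(sub.nodes()))  # ['ast.py', 'cli.py', 'parser.py']
```

Unrelated files (render.py, templates.py) never enter the context window, which is where the token savings come from.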
1. Entity Extraction: language-aware parsers (Rust, TypeScript, C, Python, Go, Markdown) extract functions, classes, modules, types, CLI commands, and API endpoints.
2. Relationship Mapping: entities are connected via calls, imports, implements, extends, and depends_on edges. Cross-file and cross-repo edges are resolved.
3. Community Detection: the Louvain algorithm groups tightly related entities into communities; these map naturally to architectural subsystems.
4. Output Generation: interactive HTML viewer, GraphRAG JSON, GRAPH_REPORT.md. Optional Neo4j export.
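Step 3 can be sketched with networkx's Louvain implementation (used here for illustration; it stands in for whatever graphify runs internally):

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Two tightly connected clusters joined by a single bridge edge;
# Louvain should recover them as separate communities.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"),   # cluster 1
                  ("x", "y"), ("x", "z"), ("y", "z"),   # cluster 2
                  ("c", "x")])                          # bridge edge
communities = louvain_communities(G, seed=42)
print([sorted(c) for c in communities])
```

In a real codebase graph, each recovered community tends to correspond to a subsystem: a parser, a renderer, a network layer.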
A Claude Code skill that sits in front of every cross-repo and architecture question. When triggered, it routes the question to the right graph registry entry and returns the relevant subgraph before any grep/glob runs.
Register each graph in ~/.claude/graphs.json. Separate graphs for separate domains keep queries fast and prevent cross-contamination. The registry lets the routing skill pick the right graph automatically.
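The registry schema isn't documented here, so the shape below is a hypothetical illustration of what ~/.claude/graphs.json could hold: one entry per domain graph for the routing skill to choose from (all names and fields are invented):

```json
{
  "graphs": [
    {
      "name": "ckb-ecosystem",
      "path": "~/.claude/graphs/ckb/graph.json",
      "topics": ["ckb", "ckb-vm", "scripts"]
    },
    {
      "name": "studio",
      "path": "~/.claude/graphs/studio/graph.json",
      "topics": ["comfyui", "tts", "musicgen"]
    }
  ]
}
```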
# Install (note: two y's in the package name)
pip install graphifyy
# Build a graph from your project
graphify ~/projects/my-project --mode deep
# Outputs:
# graph.json: GraphRAG-ready, for AI agents
# graph.html: interactive viewer, open in any browser
# GRAPH_REPORT.md: community summary, key entities
# Start the MCP server (Claude Code native integration)
graphify --mcp --graph ./graph.json --port 3100
# Watch mode β auto-updates as files change
graphify . --watch
Open graph.html in any browser. No server required. Force-directed layout, community clustering, full-text search, filter by entity type, click any node to see file path, line number, docstring, and relationship count.
Serve it for team access: python3 -m http.server 8765 --directory ~/.claude/graphs/
All generative AI, running on your own GPU. No API keys required for generation. Everything from text-to-image to voice cloning to music generation to video animation, in one interface with a shared Projects system for assembling outputs into timelines.
15+ models available, routed through ComfyUI. SDXL, Flux (dev/schnell), Stable Diffusion 1.5, Gemini image API, and more. Compare mode renders the same prompt across two models side-by-side for quality comparison. LoRA support for custom fine-tuned models.
CLIP aesthetic scoring runs automatically on outputs, so you can sort and filter generations by quality. Prompt history with favourites for building repeatable prompt libraries.
Load any video file and extract individual frames as PNG. Scrub with a frame-by-frame timeline slider. Send frames directly to Image Tools for background removal, masking, or compositing. Built for dataset creation, reference extraction, and motion study.
The /cut-subject skill wraps rembg and is available directly in Claude Code. Run /cut-subject ~/photos/image.jpg and Claude handles model selection, output naming, and verification automatically.
Meta's MusicGen for text-to-music generation. Describe a mood, genre, and instrumentation in natural language and get a full audio track. Continuation mode extends an existing clip. Loop mode generates seamlessly tiling background tracks for videos.
AnimateDiff via ComfyUI animates still images into short video clips. The Projects module provides a timeline compositor: drop in generated images, AnimateDiff clips, TTS narration, and MusicGen tracks, then export via FFmpeg to MP4.
An automated research pipeline where AI agents submit topics, a processor fetches source material and generates structured findings, and results are immediately available via REST API, then injected into Claude Code sessions via the RAG hook.
# List recent findings
GET /api/findings?limit=20&topic=blockchain
# Get a specific finding
GET /api/findings/:id
# Ingest a new finding (from agents or scripts)
POST /api/ingest
Content-Type: application/json
{
"source": "research-task-id",
"text": "Finding content...",
"title": "Title of the finding"
}
# Query the knowledge base
POST /api/query
Content-Type: application/json
{
"query": "CKB transaction fee estimation",
"agent": "kernel",
"k": 3
}
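The endpoints above work with any HTTP client. A minimal Python sketch using only the standard library; the host and port are placeholders, not part of the API spec:

```python
import json
from urllib import request

API = "http://localhost:8080"  # placeholder host/port; adjust to your deployment

def build_ingest(source: str, text: str, title: str) -> request.Request:
    """Build the POST /api/ingest request with the fields shown above."""
    payload = json.dumps({"source": source, "text": text, "title": title}).encode()
    return request.Request(f"{API}/api/ingest", data=payload,
                           headers={"Content-Type": "application/json"},
                           method="POST")

req = build_ingest("research-task-id", "Finding content...", "Title of the finding")
# request.urlopen(req) would send it; skipped here since no server is assumed.
print(req.get_method(), req.full_url)
```

The same pattern covers /api/query: swap the path and send the query, agent, and k fields instead.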