4 public repositories matching this topic
Run frontier LLMs and VLMs with day-0 model support across GPU, NPU, and CPU, with comprehensive runtime coverage for PC (Python/C++), mobile (Android & iOS), and Linux/IoT (Arm64 & x86 Docker). Supporting OpenAI GPT-OSS, IBM Granite-4, Qwen-3-VL, Gemma-3n, Ministral-3, and more.
Mechanistic-interpretability (mech-interp) suite for Granite4 models built on the Mamba-2 architecture.
A comparison of traditional RAG and Graph RAG to evaluate whether graph-based retrieval produces better results.
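The distinction that comparison hinges on can be sketched in a few lines: traditional RAG ranks documents independently by similarity to the query, while Graph RAG expands the retrieved set along explicit links between documents. Everything below — the documents, the link graph, and the lexical-overlap stand-in for embedding similarity — is a made-up toy, not that repository's implementation.

```python
# Toy contrast of traditional RAG vs Graph RAG-style retrieval.
# DOCS, LINKS, and score() are illustrative assumptions; real systems
# use embeddings and a knowledge graph.

DOCS = {
    "d1": "granite4 is a hybrid mamba transformer model",
    "d2": "mamba layers use state space models",
    "d3": "transformers use attention",
    "d4": "state space models scale linearly with sequence length",
}

# Hypothetical entity/citation links between documents.
LINKS = {"d1": ["d2", "d3"], "d2": ["d4"], "d3": [], "d4": []}

def score(query: str, doc: str) -> int:
    # Crude word-overlap count stands in for embedding similarity.
    return len(set(query.split()) & set(doc.split()))

def traditional_rag(query: str, k: int = 1) -> list[str]:
    # Rank every document independently against the query.
    return sorted(DOCS, key=lambda d: score(query, DOCS[d]), reverse=True)[:k]

def graph_rag(query: str, k: int = 1) -> list[str]:
    # Retrieve top-k seeds, then expand one hop along the graph so linked
    # context is included even if it scores poorly on its own.
    expanded = traditional_rag(query, k)
    for seed in list(expanded):
        for neighbor in LINKS.get(seed, []):
            if neighbor not in expanded:
                expanded.append(neighbor)
    return expanded

print(traditional_rag("hybrid mamba transformer"))  # → ['d1']
print(graph_rag("hybrid mamba transformer"))        # → ['d1', 'd2', 'd3']
```

Whether the extra linked context actually improves answer quality is exactly the question such a comparison sets out to measure.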
Systematic study of LoRA fine-tuning strategies for IBM Granite 4.0-H-Micro (a Mamba-2 + Transformer hybrid). Demonstrates the impact of architecture-aware target-module selection and co-training of the SSM core parameters, including an analysis of PEFT serialization behavior. Reports up to a 37% relative improvement over LoRA-only baselines.
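The "architecture-aware target selection" idea above can be sketched as a helper that chooses which leaf modules LoRA should adapt in a hybrid model. The module-name suffixes below (`q_proj`, `in_proj`, etc.) are hypothetical stand-ins, not verified Granite 4.0-H-Micro names — inspect the real model's `named_modules()` before reusing them.

```python
# Hedged sketch: pick LoRA target-module suffixes for a Mamba-2 +
# Transformer hybrid. Attention-only selection is the common baseline;
# including the SSM projections co-trains the Mamba-2 core as well.
# All module names here are illustrative assumptions.

ATTN_TARGETS = {"q_proj", "k_proj", "v_proj", "o_proj"}  # transformer blocks
SSM_TARGETS = {"in_proj", "out_proj"}                    # Mamba-2 blocks

def select_lora_targets(module_names, include_ssm=True):
    """Return the sorted set of leaf-module suffixes to adapt with LoRA.

    include_ssm=False reproduces an attention-only baseline;
    include_ssm=True also targets the SSM core projections.
    """
    wanted = ATTN_TARGETS | (SSM_TARGETS if include_ssm else set())
    suffixes = {name.rsplit(".", 1)[-1] for name in module_names}
    return sorted(suffixes & wanted)

# Hypothetical module names, loosely modeled on a hybrid state dict.
modules = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.self_attn.v_proj",
    "model.layers.1.mamba.in_proj",
    "model.layers.1.mamba.out_proj",
    "model.layers.1.mamba.conv1d",
]
print(select_lora_targets(modules, include_ssm=False))
print(select_lora_targets(modules, include_ssm=True))
```

In a real setup the returned list would feed a PEFT `LoraConfig(target_modules=...)`; note that `conv1d` is deliberately excluded, since target selection is precisely about choosing which parameter groups to adapt.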