After loading my custom model - unsupportedTokenizer error

Created 1w
Replies 1
Boosts 0
Views 191
Participants 1

In Oct25, using mlx_lm.lora I created an adapter, fused it into a model, and uploaded it to Hugging Face. I was able to incorporate this model into my SwiftUI app using the MLX Swift package (MLX libraries 2.25.8). My base LLM was mlx-community/Mistral-7B-Instruct-v0.3-4bit.

Looking at LLMModelFactory.swift in the current version (2.29.1), the only changes are the addition of a few models.

The earlier model was called pharmpk/pk-mistral-7b-v0.3-4bit; the new model is called pharmpk/pk-mistral-2026-03-29.

The base model (mlx-community/Mistral-7B-Instruct-v0.3-4bit) must still be available. Could the 'unsupportedTokenizer' error be related to changes in the MLX package? I noticed mention of splitting the package into two parts, but I don't see anything on GitHub.
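One thing worth checking before blaming the package: the Swift side resolves the tokenizer from the tokenizer_class field in the repo's tokenizer_config.json, and an unrecognized class there is a common way to hit this kind of error. Below is a minimal sketch for inspecting that field in a locally downloaded model repo. The list of "known" classes is purely illustrative (the actual supported set depends on the Swift tokenizer version), and the helper name is my own:

```python
import json
from pathlib import Path

# Illustrative only -- the classes the Swift tokenizer actually supports
# vary by version; this is NOT an authoritative list.
KNOWN_TOKENIZER_CLASSES = {
    "LlamaTokenizer",
    "LlamaTokenizerFast",
    "PreTrainedTokenizerFast",
}

def check_tokenizer_config(repo_dir: str) -> str:
    """Return the tokenizer_class declared in a model repo's
    tokenizer_config.json, printing a warning if it is not in the
    (illustrative) known-class list above."""
    config_path = Path(repo_dir) / "tokenizer_config.json"
    config = json.loads(config_path.read_text())
    cls = config.get("tokenizer_class", "<missing>")
    if cls not in KNOWN_TOKENIZER_CLASSES:
        print(f"warning: {cls!r} may not be supported by the Swift tokenizer")
    return cls
```

If the fused repo declares a different tokenizer_class than the base model (or is missing the file entirely), that would explain why the same app code works for one repo and not the other.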

Feeling rather lost. Does anyone have any thoughts and/or suggestions?

Thanks, David


With the same code and MLX libraries 2.25.8 but the new model, I get the same error. I might need to revisit the new model.
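Since the old repo works and the new one doesn't with identical app code, diffing the two repos' tokenizer configs is a quick way to see what the fuse/upload step changed. A minimal sketch, assuming both repos have been downloaded locally and each contains a tokenizer_config.json (the function name and paths are my own):

```python
import json
from pathlib import Path

def diff_tokenizer_configs(old_repo: str, new_repo: str) -> dict:
    """Compare tokenizer_config.json between two local model snapshots.
    Returns a dict of keys whose values differ, mapping each key to an
    (old_value, new_value) pair; a value of None means the key is absent
    on that side."""
    def load(repo: str) -> dict:
        return json.loads((Path(repo) / "tokenizer_config.json").read_text())

    old, new = load(old_repo), load(new_repo)
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}
```

An empty result means the tokenizer configs match and the problem lies elsewhere; a changed tokenizer_class entry would point straight at the 'unsupportedTokenizer' error.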
