@lobsters omlx: LLM inference server with continuous batching & SSD caching for Apple Silicon -- managed from the macOS menu bar #LLM #AppleSilicon #OpenSource