Update dependency ollama/ollama to v0.11.11
This MR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| ollama/ollama | | patch | 0.11.10 -> 0.11.11 |
| ollama/ollama | ironbank-github | patch | v0.11.10 -> v0.11.11 |
⚠️ Warning: Some dependencies could not be looked up. Check the warning logs for more information.
Release Notes
ollama/ollama (ollama/ollama)
v0.11.11
What's Changed
- Support for CUDA 13
- Improved memory usage when using gpt-oss in Ollama's app
- Better scrolling in Ollama's app when submitting long prompts
- Cmd +/- will now zoom and shrink text in Ollama's app
- Assistant messages can now be copied in Ollama's app
- Fixed error that would occur when attempting to import safetensors files by @rick-github in https://github.com/ollama/ollama/pull/12176
- Improved memory estimates for hybrid and recurrent models by @gabe-l-hart in https://github.com/ollama/ollama/pull/12186
- Fixed error that would occur when batch size was greater than context length
- Flash attention & KV cache quantization validation fixes by @jessegross in https://github.com/ollama/ollama/pull/12231
- Add `dimensions` field to embed requests by @mxyng in https://github.com/ollama/ollama/pull/12242 (see the sketch after this list)
- Enable new memory estimates in Ollama's new engine by default by @jessegross in https://github.com/ollama/ollama/pull/12252
- Ollama will no longer load split vision models in the Ollama engine by @jessegross in https://github.com/ollama/ollama/pull/12241
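The `dimensions` field added in https://github.com/ollama/ollama/pull/12242 is an API-level change. Below is a minimal sketch of an embed request that sets it, assuming a local Ollama server on the default port and a placeholder embedding model name; the chosen dimension count and model are illustrative assumptions, not part of the release notes.

```python
# Sketch: POST to Ollama's /api/embed endpoint with the new "dimensions" field.
# Server URL, model name, and dimension count are assumptions for illustration.
import requests

resp = requests.post(
    "http://localhost:11434/api/embed",
    json={
        "model": "embeddinggemma",    # hypothetical embedding model name
        "input": "The sky is blue.",
        "dimensions": 256,            # field introduced in this release
    },
    timeout=30,
)
resp.raise_for_status()
# The response carries a list of embedding vectors; print the first vector's length.
print(len(resp.json()["embeddings"][0]))
```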
New Contributors
- @KashyapTan made their first contribution in https://github.com/ollama/ollama/pull/12188
- @carbonatedWaterOrg made their first contribution in https://github.com/ollama/ollama/pull/12230
- @fengyuchuanshen made their first contribution in https://github.com/ollama/ollama/pull/12249
Full Changelog: https://github.com/ollama/ollama/compare/v0.11.10...v0.11.11
Configuration
- [ ] If you want to rebase/retry this MR, check this box
This MR has been generated by Renovate Bot.