Update ollama/ollama Docker tag to v0.1.35 #156

Merged
mafyuh merged 1 commit from renovate/ollama-ollama-0.x into main 2024-05-10 23:17:44 -04:00

This PR contains the following updates:

Package       | Update | Change
ollama/ollama | patch  | 0.1.34 -> 0.1.35

Release Notes

ollama/ollama (ollama/ollama)

v0.1.35

Compare Source

New models

  • Llama 3 ChatQA: A model from NVIDIA based on Llama 3 that excels at conversational question answering (QA) and retrieval-augmented generation (RAG).

What's Changed

  • Quantization: ollama create can now quantize models when importing them using the --quantize or -q flag:
ollama create -f Modelfile --quantize q4_0 mymodel

[!NOTE]
--quantize works when importing float16 or float32 models:

  • From a binary GGUF file (e.g. FROM ./model.gguf)
  • From a library model (e.g. FROM llama3:8b-instruct-fp16)
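As a minimal sketch of the first case (the file name is a placeholder, not taken from the release notes), a Modelfile that imports a local float16 GGUF export might look like:

```
# Modelfile: assumes ./model.gguf is a float16 or float32 GGUF export
FROM ./model.gguf
```

Running ollama create -f Modelfile --quantize q4_0 mymodel against such a file would then produce a 4-bit-quantized model named mymodel, instead of importing the weights at full precision.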
  • Fixed issue where inference subprocesses wouldn't be cleaned up on shutdown.
  • Fixed a series of out-of-memory errors when loading models on multi-GPU systems
  • Ctrl+J characters will now properly add newlines in ollama run
  • Fixed issues when running ollama show for vision models
  • OPTIONS requests to the Ollama API will no longer result in errors
  • Fixed issue where partially downloaded files wouldn't be cleaned up
  • Added a new done_reason field in responses describing why generation stopped
  • Ollama will now more accurately estimate how much memory is available on multi-GPU systems, especially when running different models one after another
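As a sketch of how the new field looks to a client (the sample body below is hand-written for illustration, not captured from a live server), done_reason can be read from the JSON of a generate response once done is true:

```python
import json

# Hand-written sample modeled on an Ollama /api/generate response body.
# Only done and done_reason are the fields under discussion here; the
# other fields and their values are illustrative assumptions.
sample_response = json.dumps({
    "model": "mymodel",
    "response": "Hello!",
    "done": True,
    "done_reason": "stop",  # new in v0.1.35: why generation ended
})

body = json.loads(sample_response)
if body.get("done"):
    print(f"generation ended, reason: {body['done_reason']}")
```

Values such as "stop" (the model finished naturally) versus "length" (a token limit was hit) let a client distinguish a natural finish from truncation; treat the exact value set as an assumption to verify against the Ollama API docs.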

New Contributors

  • @fmaclen made their first contribution in https://github.com/ollama/ollama/pull/3884
  • @Renset made their first contribution in https://github.com/ollama/ollama/pull/3881
  • @glumia made their first contribution in https://github.com/ollama/ollama/pull/3043
  • @boessu made their first contribution in https://github.com/ollama/ollama/pull/4236
  • @gaardhus made their first contribution in https://github.com/ollama/ollama/pull/2307
  • @svilupp made their first contribution in https://github.com/ollama/ollama/pull/2192
  • @WolfTheDeveloper made their first contribution in https://github.com/ollama/ollama/pull/4300

Full Changelog: https://github.com/ollama/ollama/compare/v0.1.34...v0.1.35


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

renovatebot added 1 commit 2024-05-10 23:08:30 -04:00
Update ollama/ollama Docker tag to v0.1.35
All checks were successful
continuous-integration/drone/pr Build is passing
4959d1262e
mafyuh merged commit b91805853a into main 2024-05-10 23:17:44 -04:00
mafyuh deleted branch renovate/ollama-ollama-0.x 2024-05-10 23:17:44 -04:00
Reference: mafyuh/Auto-Homelab#156