⬆️ Update docker.mafyuh.xyz/ollama/ollama Docker tag to v0.1.39 #243

Merged
mafyuh merged 1 commit from renovate/docker.mafyuh.xyz-ollama-ollama-0.x into main 2024-05-24 08:47:21 -04:00
Collaborator

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| docker.mafyuh.xyz/ollama/ollama | patch | `0.1.38` -> `0.1.39` |

Release Notes

ollama/ollama (docker.mafyuh.xyz/ollama/ollama)

v0.1.39

Compare Source

New models

  • Cohere Aya 23: A new state-of-the-art, multilingual LLM covering 23 different languages.
  • Mistral 7B 0.3: A new version of Mistral 7B with initial support for function calling.
  • Phi-3 Medium: A 14B-parameter, lightweight, state-of-the-art open model by Microsoft.

Llama 3 import

It is now possible to import and quantize Llama 3 and its finetunes from Safetensors format to Ollama.

First, clone a Hugging Face repo with a Safetensors model:

git clone https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
cd Meta-Llama-3-8B-Instruct

Next, create a Modelfile:

FROM .

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
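The TEMPLATE above uses Go template syntax to assemble Llama 3's chat format. As a rough illustration only (not part of the Modelfile), this Python sketch shows what a single system + user turn expands to before the model writes its response:

```python
def render_llama3_prompt(system, prompt):
    """Illustrative expansion of the TEMPLATE above for one turn."""
    out = ""
    if system:
        out += f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
    if prompt:
        out += f"<|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|>"
    # The model generates its reply after the assistant header:
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

print(render_llama3_prompt("You are concise.", "Why is the sky blue?"))
```

The three `PARAMETER stop` lines tell Ollama to cut generation at any of these special tokens, which is why they never leak into the response text.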

Then, create and quantize a model:

ollama create --quantize q4_0 -f Modelfile my-llama3 
ollama run my-llama3
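Besides the interactive `ollama run`, the created model can be queried over Ollama's local HTTP API (default port 11434). A minimal sketch, assuming the `my-llama3` name from the step above and the standard `/api/generate` route:

```python
import json
import urllib.request

def generate_request(model, prompt):
    """Build a request for Ollama's /api/generate endpoint (local server assumed)."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

req = generate_request("my-llama3", "Why is the sky blue?")
# response = urllib.request.urlopen(req)  # requires a running `ollama serve`
```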

What's Changed

  • Fixed display issues with wide characters in languages such as Chinese, Korean, Japanese and Russian
  • Added new OLLAMA_NOHISTORY=1 environment variable that can be set to disable history when using ollama run
  • New experimental OLLAMA_FLASH_ATTENTION=1 flag for ollama serve that improves token generation speed on Apple Silicon Macs and NVIDIA graphics cards
  • Fixed error that would occur on Windows running ollama create -f Modelfile
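Both flags above are plain environment variables, set per process rather than globally. A small Python sketch of passing them through to the CLI (the `my-llama3` model name is hypothetical, and the launch lines are commented out since they need the `ollama` binary):

```python
import os

# Copy the current environment and add the flags from the notes above.
run_env = dict(os.environ, OLLAMA_NOHISTORY="1")          # disable prompt history
serve_env = dict(os.environ, OLLAMA_FLASH_ATTENTION="1")  # experimental flash attention

# With a local ollama install, these would launch the processes:
# subprocess.run(["ollama", "run", "my-llama3"], env=run_env)
# subprocess.run(["ollama", "serve"], env=serve_env)
```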

New Contributors

  • @rapmd73 made their first contribution in https://github.com/ollama/ollama/pull/4467
  • @sammcj made their first contribution in https://github.com/ollama/ollama/pull/4120
  • @likejazz made their first contribution in https://github.com/ollama/ollama/pull/4535

Full Changelog: https://github.com/ollama/ollama/compare/v0.1.38...v0.1.39-rc1


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

renovatebot added 1 commit 2024-05-24 04:28:35 -04:00
⬆️ Update docker.mafyuh.xyz/ollama/ollama Docker tag to v0.1.39
All checks were successful
continuous-integration/drone/pr Build is passing
612626a1c7
mafyuh merged commit 2c3678eb58 into main 2024-05-24 08:47:21 -04:00
mafyuh deleted branch renovate/docker.mafyuh.xyz-ollama-ollama-0.x 2024-05-24 08:47:21 -04:00
CD-Bot reviewed 2024-05-24 08:47:44 -04:00
CD-Bot left a comment
Collaborator

Continuous Deployment successfully ran.

Git Logs:

Updating 0a0e928..2c3678e
Fast-forward
AI/docker-compose.yml | 2 +-
README.md | 8 +++++++-
ag-backup/docker-compose.yml | 2 +-
ag-main/docker-compose.yml | 2 +-
4 files changed, 10 insertions(+), 4 deletions(-)

Docker Compose Logs:

time="2024-05-24T08:47:29-04:00" level=warning msg="/home/mafyuh/Auto-Homelab/AI/docker-compose.yml: version is obsolete"
ollama Pulling
ollama Pulled
Container mindsdb Running
Container open-webui Running
Container ollama Recreate
Container ollama Recreated
Container ollama Starting
Container ollama Started

This repo is archived. You cannot comment on pull requests.
No reviewers
No milestone
No project
No assignees
2 participants
Due date

No due date set.

Dependencies

No dependencies set.

Reference: mafyuh/Auto-Homelab#243
No description provided.