Archive for the ‘AI’ Category
Intel Gaudi 3 AI Accelerator – delivers greater speed, scalability, and developer productivity
Thursday, September 26th, 2024
HP EliteBook X G1a AI PC – Ryzen AI 300 CPU, 40W TDP, and large battery, announced for January 2025
Wednesday, September 25th, 2024
Ollama v0.3.11 – what’s changed
Wednesday, September 25th, 2024
Ollama v0.3.11 – what’s changed
show newest model information
root@pve-ai-llm-12:~# docker exec -it ollama ollama -h
Large language model runner
Usage:
ollama [flags]
ollama [command]
Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command
Flags:
  -h, --help      help for ollama
  -v, --version   Show version information
Use "ollama [command] --help" for more information about a command.
root@pve-ai-llm-12:~#
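Typing the docker exec prefix for every call gets tedious; a small convenience sketch (assuming the container is named ollama, as in the transcript above) is to define a shell alias on the host so the commands below read like a native install:

# optional: shorten 'docker exec -it ollama ollama ...' to plain 'ollama ...'
alias ollama='docker exec -it ollama ollama'
ollama list    # now equivalent to: docker exec -it ollama ollama list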
root@pve-ai-llm-12:~# docker exec -it ollama ollama show llama3.1
  Model
    architecture        llama
    parameters          8.0B
    context length      131072
    embedding length    4096
    quantization        Q4_0

  Parameters
    stop    "<|start_header_id|>"
    stop    "<|end_header_id|>"
    stop    "<|eot_id|>"

  License
    LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
    Llama 3.1 Version Release Date: July 23, 2024
root@pve-ai-llm-12:~#
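The same model details can be fetched over Ollama’s REST API via the /api/show endpoint; a minimal curl sketch, assuming the container publishes the default port 11434 on localhost (the request field is model in the current API docs, older builds used name):

# POST /api/show returns the model details (architecture, parameters, license, ...) as JSON
curl http://localhost:11434/api/show -d '{"model": "llama3.1"}'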
root@pve-ai-llm-12:~# docker exec -it ollama ollama list
NAME               ID              SIZE      MODIFIED
phi3.5:latest      3b387c8dd9b7    2.2 GB    4 weeks ago
llama3.1:latest    62757c860e01    4.7 GB    8 weeks ago
root@pve-ai-llm-12:~#
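ollama list also has a REST equivalent; a minimal sketch against the documented /api/tags endpoint, under the same localhost:11434 assumption:

# GET /api/tags lists the locally available models as JSON
curl http://localhost:11434/api/tags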
root@pve-ai-llm-12:~# docker exec -it ollama ollama ps
NAME               ID              SIZE      PROCESSOR    UNTIL
llama3.1:latest    62757c860e01    6.2 GB    100% CPU     3 minutes from now
root@pve-ai-llm-12:~#
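Running models can be queried over the API as well; a sketch using the documented /api/ps endpoint:

# GET /api/ps lists the models currently loaded into memory
curl http://localhost:11434/api/ps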
root@pve-ai-llm-12:~# docker exec -it ollama ollama stop llama3.1
root@pve-ai-llm-12:~#
root@pve-ai-llm-12:~# docker exec -it ollama ollama --version
ollama version is 0.3.11
root@pve-ai-llm-12:~#
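For scripting, the server version is also exposed over the API; a minimal sketch using the /api/version endpoint (same localhost:11434 assumption as above):

# GET /api/version returns the running Ollama version as JSON
curl http://localhost:11434/api/version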
Meta Llama 3.1 70B – GPU requirements for FP32, FP16, INT8, and INT4
Tuesday, September 24th, 2024
Run all your AI locally – in minutes, with LLMs, RAG, and more
Monday, September 23rd, 2024
Ollama – with N8N, an extendable workflow automation tool
Sunday, September 22nd, 2024
N8N – self-hosted AI Starter Kit
Sunday, September 22nd, 2024
AMD Ryzen AI 9 HX 375 – fastest Neural Engine ever, capable of up to 55 Trillion Operations Per Second (TOPS)
Sunday, September 22nd, 2024
Ollama v0.3.11 – newest release, uncovering one incredible feature that has been highly anticipated for ages
Saturday, September 21st, 2024
Ollama & Open WebUI – how to write code for Python 'DocString'
Friday, September 20th, 2024
Google Coral Tensor Processing Unit (TPU) M.2 PCIe – Installation in Frigate 0.14 LXC on Proxmox Virtual Environment (VE) 8.x
Thursday, September 19th, 2024
ChatGPT-o1 – tested for computer scientists
Tuesday, September 17th, 2024
WordLlama – a fast, lightweight NLP toolkit that handles tasks like fuzzy deduplication, similarity, and ranking
Tuesday, September 17th, 2024
Aoostar Gem10 370 Mini PC – AMD Ryzen AI 9 HX 370, 12 Cores / 24 Threads / 5.1 GHz
Sunday, September 15th, 2024
Aoostar – Gem10 370 Mini PC, the integrated NPU delivers up to 50 TOPS



