9027d695c1  feat: make llama3.1 the default (#2022)

* feat: change ollama default model to llama3.1
* chore: bump versions
* feat: change default model in local mode to llama3.1
* chore: make sure the latest poetry version is used
* fix: mypy
* fix: do not add BOS (with the latest llama-cpp-python version)

2024-07-31 14:35:36 +02:00
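The default-model change above lands in the settings files; a minimal sketch of what the Ollama section might look like after this commit (key names are assumed from PrivateGPT's `settings-*.yaml` convention, not verified against the repository):

```yaml
# Hypothetical fragment of settings-ollama.yaml after #2022.
# Key names are illustrative, not copied from the repo.
ollama:
  llm_model: llama3.1
```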
				
					
						
							
							
								 
						
							
65c5a1708b  chore(docker): dockerfiles improvements and fixes (#1792)

* `UID` and `GID` build arguments for the `worker` user
* `POETRY_EXTRAS` build argument with default values
* Copy `Makefile` for the `make ingest` command
* Do NOT copy markdown files
  (I doubt anyone reads a markdown file within a Docker image)
* Fix PYTHONPATH value
* Set home directory to `/home/worker` when creating the user
* Combine `ENV` instructions together
* Define environment variables with their defaults
  - For documentation purposes
  - Reflect defaults set in settings-docker.yml
* `PGPT_EMBEDDING_MODE` to define the embedding mode
* Remove ineffective `python3 -m pipx ensurepath`
* Use `&&` instead of `;` to chain commands so failures are detected
* Add `--no-root` flag to poetry install commands
* Set PGPT_PROFILES to docker
* chore: remove envs
* chore: update to use ollama in docker-compose
* chore: don't copy makefile
* chore: don't copy fern
* fix: tiktoken cache
* fix: docker compose port
* fix: ffmpy dependency (#2020)
  - fix: ffmpy dependency
  - fix: pin ffmpy to a commit sha
* feat(llm): autopull ollama models (#2019)
  - chore: update ollama (llm)
  - feat: allow autopulling of ollama models
  - fix: mypy
  - chore: always install the ollama client
  - refactor: move the connection check and ollama pull into utils
  - docs: update ollama config with autopulling info
  ...
* chore: autopull ollama models
* chore: add GID/UID comment
...

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

2024-07-30 17:59:38 +02:00
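Several of the Dockerfile points above (`UID`/`GID` build arguments, explicit home directory, combined `ENV` instructions, `&&` chaining) can be sketched together; this is an illustrative fragment assuming a Debian-based image, not the project's actual Dockerfile:

```dockerfile
# Illustrative fragment, not the project's actual Dockerfile.
# Build arguments let the host override the worker user's IDs, e.g.:
#   docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) .
ARG UID=1000
ARG GID=1000

# Create group and user with an explicit home directory;
# chained with && so any failure aborts the build step.
RUN addgroup --gid "${GID}" worker \
 && adduser --uid "${UID}" --gid "${GID}" --home /home/worker \
    --disabled-password --gecos "" worker

# One combined ENV instruction: fewer layers, and the defaults
# double as documentation of what settings-docker.yml expects.
ENV PYTHONUNBUFFERED=1 \
    PGPT_PROFILES=docker

USER worker
WORKDIR /home/worker
```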
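The autopull work in #2019 above has to decide which Ollama models are missing locally before pulling them. A minimal, hypothetical sketch of that check (the function name and signature are illustrative; the real utility lives in the project's utils and talks to the Ollama client):

```python
# Hypothetical sketch of the "which models still need pulling" check
# behind the autopull feature (#2019). Names are illustrative only.

def models_to_pull(required: list[str], installed: list[str]) -> list[str]:
    """Return the required models not yet present locally.

    Ollama reports installed models with an explicit tag
    (e.g. "llama3.1:latest"), so compare on the base name.
    """
    installed_bases = {name.split(":", 1)[0] for name in installed}
    return [m for m in required if m.split(":", 1)[0] not in installed_bases]
```

A caller would pull each returned model via the Ollama client before starting the service, so a fresh container becomes usable without a manual `ollama pull`.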
				
					
						
							
							
								 
						
							
f83abff8bc  feat(docker): set default Docker to use Ollama (#1812)

2024-04-01 13:08:48 +02:00
				
					
						
							
							
								 
						
							
45f05711eb  feat: Upgrade LlamaIndex to 0.10 (#1663)

* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
* Upgrade to llamaindex 0.10.14; remove legacy use of ServiceContext in ContextChatEngine
* Fix vector retriever filters

2024-03-06 17:51:30 +01:00
				
					
						
							
							
								 
						
							
fde2b942bc  fix(deploy): fix local and external dockerfiles

2023-12-22 14:16:46 +01:00
				
					
						
							
							
								 
						
							
059f35840a  fix(docker): docker broken copy (#1419)

2023-12-18 16:55:18 +01:00
				
					
						
							
							
								 
						
							
0d677e10b9  feat: move torch and transformers to local group (#1172)

2023-11-06 14:24:16 +01:00