42628596b2  ci: bump actions/checkout to v4 (#2077)  [2024-09-09 08:53:13 +02:00]

7603b3627d  fix: rectify ffmpy poetry config; update version from 0.3.2 to 0.4.0 (#2062)  [2024-08-21 10:39:58 +02:00]
    * Fix: rectify ffmpy 0.3.2 poetry config
    * Keep optional set to false for ffmpy
    * Update ffmpy to version 0.4.0
    * Remove comment about a fix

89477ea9d3  fix: naming of image and ollama-cpu (#2056)  [2024-08-12 08:23:16 +02:00]

22904ca8ad  chore(main): release 0.6.2 (#2049)  [2024-08-08 18:16:41 +02:00]
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

7fefe408b4  fix: auto-update version (#2052)  [2024-08-08 16:50:42 +02:00]

b1acf9dc2c  fix: publish image name (#2043)  [2024-08-07 17:39:32 +02:00]

4ca6d0cb55  fix: add numpy issue to troubleshooting (#2048)  [2024-08-07 12:16:03 +02:00]
    * docs: add numpy issue to troubleshooting
    * fix: troubleshooting link
    ...

b16abbefe4  fix: update matplotlib to 3.9.1-post1 to fix Windows install  [2024-08-07 11:26:42 +02:00]
    * chore: pin matplotlib to fix installation on Windows machines
    * chore: remove workaround, just update poetry.lock
    * fix: update matplotlib to the latest version

ca2b8da69c  chore(main): release 0.6.1 (#2041)  [2024-08-05 17:17:34 +02:00]
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
				
					
						
							
							
								 
						
							
f09f6dd255  fix: add built image from DockerHub (#2042)  [2024-08-05 17:15:38 +02:00]
    * chore: update docker-compose with profiles
    * docs: add quick start doc
    * chore: generate Docker release when a new version is released
    * chore: add DockerHub image in docker-compose
    * docs: update quickstart with local/remote images
    * chore: update docker tag
    * chore: refactor Dockerfile names
    * chore: update docker-compose names
    * docs: update llamacpp naming
    * fix: naming
    * docs: fix llamacpp command
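The profile-based docker-compose layout described in this entry (a locally built image alongside a prebuilt DockerHub image) can be sketched as follows; the service names, image name, and Dockerfile name below are illustrative placeholders, not the repository's actual configuration:

```yaml
services:
  private-gpt-local:
    build:
      context: .
      dockerfile: Dockerfile.llamacpp-cpu   # hypothetical local-build Dockerfile
    profiles: ["local"]

  private-gpt-remote:
    image: zylonai/private-gpt:latest       # hypothetical DockerHub image
    profiles: ["remote"]
```

With this shape, `docker compose --profile remote up` pulls the prebuilt image, while `--profile local` builds from source; services without a matching profile are simply not started.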
				
					
						
							
							
								 
						
							
1c665f7900  fix: add azopenai to model list (#2035)  [2024-08-05 16:30:10 +02:00]
    Fixes the error encountered while using the azopenai mode

1d4c14d7a3  fix(deploy): generate Docker release when a new version is released (#2038)  [2024-08-05 16:28:19 +02:00]

dae0727a1b  fix(deploy): improve docker-compose and quickstart on Docker (#2037)  [2024-08-05 16:28:19 +02:00]
    * chore: update docker-compose with profiles
    * docs: add quick start doc

6674b46fea  chore(main): release 0.6.0 (#1834)  [2024-08-02 11:28:22 +02:00]
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

e44a7f5773  chore: bump version (#2033)  [2024-08-02 11:26:03 +02:00]
				
					
						
							
							
								 
						
							
cf61bf780f  feat(llm): add progress bar when Ollama is pulling models (#2031)  [2024-08-01 19:14:26 +02:00]
    * fix: add Ollama progress bar when pulling models
    * feat: add Ollama queue
    * fix: mypy
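A model pull reports completed/total byte counts as it streams; a minimal text progress bar over such counts could look like the sketch below. The function name and bar width are my own, not PrivateGPT's code:

```python
def render_progress(completed: int, total: int, width: int = 30) -> str:
    """Render a text progress bar from completed/total byte counts."""
    if total <= 0:
        # Size not known yet (e.g. first status message of a pull)
        return "[" + "?" * width + "]"
    filled = int(width * completed / total)
    percent = 100 * completed / total
    return f"[{'#' * filled}{'-' * (width - filled)}] {percent:5.1f}%"

# Each streamed status update (a pull typically yields per-layer
# completed/total fields) can be fed straight into this renderer:
print(render_progress(512, 1024))
```

Printing with a carriage return (`print(..., end="\r")`) instead of a newline keeps the bar updating in place on a terminal.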
				
					
						
							
							
								 
						
							
50b3027a24  docs: update docs and capture (#2029)  [2024-08-01 10:01:22 +02:00]
    * docs: update README
    * style: refactor image
    * docs: change important to tip

54659588b5  fix: nomic embeddings (#2030)  [2024-08-01 09:43:30 +02:00]
    * fix: allow configuring trust_remote_code
      (based on: https://github.com/zylon-ai/private-gpt/issues/1893#issuecomment-2118629391)
    * fix: nomic HF embeddings

8119842ae6  feat(recipe): add our first recipe, `Summarize` (#2028)  [2024-07-31 16:53:27 +02:00]
    * feat: add summary recipe
    * test: add summary tests
    * docs: move all recipes docs
    * docs: add recipes and summarize doc
    * docs: update OpenAPI reference
    * refactor: split method in two (summary)
    * feat: add initial summarize UI
    * feat: add mode explanation
    * fix: mypy
    * feat: allow configuring the async property in summarize
    * refactor: move modes to enum and update mode explanations
    * docs: fix url
    * docs: remove list-llm pages
    * docs: remove double header
    * fix: summary description

40638a18a5  fix: unify embedding models (#2027)  [2024-07-31 14:35:46 +02:00]
    * feat: unify embedding model to nomic
    * docs: add embedding dimensions mismatch
    * docs: fix fern

9027d695c1  feat: make llama3.1 the default (#2022)  [2024-07-31 14:35:36 +02:00]
    * feat: change Ollama default model to llama3.1
    * chore: bump versions
    * feat: change default model in local mode to llama3.1
    * chore: make sure the latest poetry version is used
    * fix: mypy
    * fix: do not add BOS (with the latest llama-cpp-python version)
				
					
						
							
							
								 
						
							
e54a8fe043  fix: prevent ingestion of local files by default (#2010)  [2024-07-31 14:33:46 +02:00]
    * feat: disable local ingestion by default and add a white-list
    * docs: add local ingestion warning
    * docs: add missing comment
    * fix: update exception error
    * fix: black
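The white-list behaviour this entry describes (local paths rejected unless ingestion is explicitly enabled or the path falls under an allowed prefix) can be sketched like this; the function name and parameters are illustrative, not PrivateGPT's actual identifiers or settings keys:

```python
from pathlib import Path


def check_local_ingestion(path: str, enabled: bool, allowed: list[str]) -> None:
    """Raise unless local ingestion is enabled or the path is white-listed.

    `enabled` and `allowed` stand in for hypothetical settings such as a
    global ingestion switch and a list of permitted directory prefixes.
    """
    if enabled:
        return  # ingestion globally allowed
    resolved = Path(path).resolve()
    for prefix in allowed:
        if resolved.is_relative_to(Path(prefix).resolve()):
            return  # path sits under a white-listed directory
    raise ValueError(f"Ingestion of local file is disabled: {path!r}")
```

Resolving the path before comparing prefixes matters: it closes the classic `../` traversal hole where a path lexically inside the white-list escapes it after normalization.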
				
					
						
							
							
								 
						
							
1020cd5328  fix: light mode (#2025)  [2024-07-31 12:59:31 +02:00]
				
					
						
							
							
								 
						
							
65c5a1708b  chore(docker): Dockerfile improvements and fixes (#1792)  [2024-07-30 17:59:38 +02:00]
    * `UID` and `GID` build arguments for the `worker` user
    * `POETRY_EXTRAS` build argument with default values
    * Copy `Makefile` for the `make ingest` command
    * Do NOT copy markdown files (hardly anyone reads a markdown file inside a Docker image)
    * Fix PYTHONPATH value
    * Set home directory to `/home/worker` when creating the user
    * Combine `ENV` instructions
    * Define environment variables with their defaults
      - for documentation purposes
      - to reflect the defaults set in settings-docker.yml
    * `PGPT_EMBEDDING_MODE` to define the embedding mode
    * Remove ineffective `python3 -m pipx ensurepath`
    * Use `&&` instead of `;` to chain commands so failures are detected
    * Add `--no-root` flag to poetry install commands
    * Set PGPT_PROFILES to docker
    * chore: remove envs
    * chore: update to use Ollama in docker-compose
    * chore: don't copy Makefile
    * chore: don't copy fern
    * fix: tiktoken cache
    * fix: docker compose port
    * fix: ffmpy dependency (#2020)
      - fix: ffmpy dependency
      - fix: pin ffmpy to a commit SHA
    * feat(llm): autopull Ollama models (#2019)
      - chore: update Ollama (llm)
      - feat: allow autopulling Ollama models
      - fix: mypy
      - chore: always install the Ollama client
      - refactor: move connection check and pull methods to utils
      - docs: update Ollama config with autopulling info
    ...
    * chore: autopull Ollama models
    * chore: add GID/UID comment
    ...
    Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
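Several of the points above (UID/GID build arguments, a `POETRY_EXTRAS` argument, one combined `ENV` instruction, `&&` chaining, `--no-root`) can be condensed into a Dockerfile sketch. This illustrates the techniques only; the base image, paths, and default extras are placeholders, not the repository's actual Dockerfile:

```dockerfile
FROM python:3.11-slim

# Build arguments so the in-container user can match the host user
ARG UID=1000
ARG GID=1000
# Extras are parameterized; this default is a placeholder
ARG POETRY_EXTRAS="ui llms-ollama embeddings-ollama"

# One combined ENV instruction instead of several layers
ENV PYTHONPATH=/home/worker/app \
    PGPT_PROFILES=docker

# Create the worker user with an explicit home directory
RUN groupadd -g "${GID}" worker && \
    useradd -m -d /home/worker -u "${UID}" -g "${GID}" worker

WORKDIR /home/worker/app
COPY pyproject.toml poetry.lock ./

# && makes any failing step fail the build (unlike ;);
# --no-root skips installing the project itself at this layer
RUN pip install poetry && \
    poetry install --no-root --extras "${POETRY_EXTRAS}"

USER worker
```

Passing `--build-arg UID=$(id -u) --build-arg GID=$(id -g)` at build time keeps files written to bind mounts owned by the invoking host user.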
				
					
						
							
							
								 
						
							
d080969407  feat: add llama3 prompt (#1962)  [2024-07-29 17:28:00 +02:00]
    * added llama3 prompt
    * more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14
    * fix: new llama3 prompt
    Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

d4375d078f  fix(ui): Gradio bug fixes (#2021)  [2024-07-29 16:48:16 +02:00]
    * fix: two user messages being sent in a row
    * fix: add source divider
    * fix: add favicon
    * fix: add Zylon link
    * refactor: update label

20bad17c98  feat(llm): autopull Ollama models (#2019)  [2024-07-29 13:25:42 +02:00]
    * chore: update Ollama (llm)
    * feat: allow autopulling Ollama models
    * fix: mypy
    * chore: always install the Ollama client
    * refactor: move connection check and pull methods to utils
    * docs: update Ollama config with autopulling info

dabf556dae  fix: ffmpy dependency (#2020)  [2024-07-29 11:56:57 +02:00]
    * fix: ffmpy dependency
    * fix: pin ffmpy to a commit SHA

05a986231c  Add proper param to demo URLs (#2007)  [2024-07-22 14:44:03 +02:00]

b62669784b  docs: update welcome page (#2004)  [2024-07-18 14:42:39 +02:00]

2c78bb2958  docs: add PR and issue templates (#2002)  [2024-07-18 12:56:10 +02:00]
    * chore: add pull request template
    * chore: add issue templates
    * chore: require more information in bug reports

90d211c5cd  Update README.md (#2003)  [2024-07-18 12:11:24 +02:00]
    * Remove the outdated contact form and point to the Zylon website for those looking for a ready-to-use enterprise solution built on top of PrivateGPT
    * Update text to address review comments
    * Improve text

43cc31f740  feat(vectordb): Milvus vector db integration (#1996)  [2024-07-18 10:55:45 +02:00]
    * integrate Milvus into PrivateGPT
    * adjust Milvus settings
    * update doc info and reformat
    * adjust Milvus initialization
    * adjust import error
    * minor update
    * adjust format
    * adjust the db storage path
    * update doc

4523a30c8f  feat(docs): update documentation and fix preview-docs (#2000)  [2024-07-18 10:06:51 +02:00]
    * docs: add missing configurations
    * docs: replace HF embeddings with Ollama
    * docs: add disclaimer about the Gradio UI
    * docs: improve readability in concepts
    * docs: reorder `Fully Local Setups`
    * docs: improve setup instructions
    * docs: remove duplicate documentation and use a table to show the different options
    * docs: rename privateGpt to PrivateGPT
    * docs: update UI image
    * docs: remove useless header
    * docs: convert ingestion disclaimers to alerts
    * docs: add UI alternatives
    * docs: reference UI alternatives in disclaimers
    * docs: fix table
    * chore: update doc preview version
    * chore: add permissions
    * chore: remove useless line
    * docs: fixes
    ...

01b7ccd064  fix(config): make tokenizer optional and include a troubleshooting doc (#1998)  [2024-07-17 10:06:27 +02:00]
    * docs: add troubleshooting
    * fix: pass HF token to setup script and skip downloading the tokenizer when the token is empty
    * fix: improve log and disable specific tokenizer by default
    * chore: change HF_TOKEN environment variable to align with the default config
    * fix: mypy

15f73dbc48  docs: update repo links, citations (#1990)  [2024-07-09 10:03:57 +02:00]
    * docs: update project links
    ...
    * docs: update citation

187bc9320e  feat: add GitHub button (#1989)  [2024-07-09 08:48:47 +02:00]
    Co-authored-by: chdeskur <chdeskur@gmail.com>

dde02245bc  fix(docs): fix concepts.mdx reference to the installation page (#1779)  [2024-07-08 16:19:50 +02:00]
    * The link for `/installation` is broken on the "Main Concepts" page; the correct path would be `./installation` or maybe `/installation/getting-started/installation`
    * fix: docs
    Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

067a5f144c  feat(docs): fix setup docs (#1926)  [2024-07-08 16:19:16 +02:00]
    * Update settings.mdx
    * docs: add cmd
    Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

2612928839  feat(vectorstore): add ClickHouse support as vector store (#1883)  [2024-07-08 16:18:22 +02:00]
    * Added ClickHouse vector store support
    * port fix
    * updated lock file
    * fix: mypy
    * fix: mypy
    Co-authored-by: Valery Denisov <valerydenisov@double.cloud>
    Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

fc13368bc7  feat(llm): support for Google Gemini LLMs and embeddings (#1965)  [2024-07-08 11:47:36 +02:00]
    * Initial support for Gemini; enables usage of Google LLMs and embedding models (see settings-gemini.yaml). Install via:
      poetry install --extras "llms-gemini embeddings-gemini"
      Notes:
      - had to bump llama-index-core to a later version that supports Gemini
      - poetry --no-update did not work: Gemini/llama_index seem to require more (transitive) updates to make it work
    * fix: crash when Gemini is not selected
    * docs: add Gemini LLM
    Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

19a7c065ef  feat(docs): update doc for ipex-llm (#1968)  [2024-07-08 09:42:44 +02:00]

b687dc8524  feat: bump dependencies (#1987)  [2024-07-05 16:31:13 +02:00]
				
					
						
							
							
								 
						
							
c7212ac7cc  fix(LLM): Mistral ignoring assistant messages (#1954)  [2024-05-30 15:41:16 +02:00]
    * fix: Mistral ignoring assistant messages
    * fix: typing
    * fix: tests
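The bug class behind this fix is a prompt builder that drops assistant turns when flattening a chat history. A minimal sketch of a Mistral-instruct-style formatter that keeps them follows; the template shape uses the common `[INST]` convention, and the function is illustrative, not PrivateGPT's actual code:

```python
def format_mistral_prompt(messages: list[tuple[str, str]]) -> str:
    """Flatten (role, content) chat turns into a Mistral-instruct prompt.

    Assistant turns must be emitted between [/INST] and the next [INST];
    silently skipping them is exactly the bug this commit fixed.
    """
    prompt = "<s>"
    for role, content in messages:
        if role == "user":
            prompt += f"[INST] {content} [/INST]"
        elif role == "assistant":
            # Keep assistant history instead of dropping it
            prompt += f"{content}</s>"
    return prompt


history = [("user", "Hi"), ("assistant", "Hello!"), ("user", "How are you?")]
print(format_mistral_prompt(history))
# → <s>[INST] Hi [/INST]Hello!</s>[INST] How are you? [/INST]
```

Without the `assistant` branch, multi-turn conversations degrade into a string of user questions with no memory of the model's own answers.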
				
					
						
							
							
								 
						
							
3b3e96ad6c  Allow parameterizing the OpenAI embeddings component (api_base, key, model) (#1920)  [2024-05-17 09:52:50 +02:00]
    * Allow parameterizing the OpenAI embeddings component (api_base, key, model)
    * Update settings
    * Update description
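A settings fragment exposing those three parameters might look like the following. The section layout and key names here are assumptions for illustration; check the project's settings.yaml for the real ones:

```yaml
embedding:
  mode: openai

openai:
  api_base: https://api.openai.com/v1     # point at any OpenAI-compatible server
  api_key: ${OPENAI_API_KEY:}             # read from the environment
  embedding_model: text-embedding-3-small # illustrative model name
```

Making `api_base` configurable is what lets the same embeddings component talk to OpenAI-compatible local servers instead of the hosted API.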
				
					
						
							
							
								 
						
							
45df99feb7  Add timeout parameter for better support of OpenAI-like LLM tools on local computers (like LM Studio) (#1858)  [2024-05-10 16:44:08 +02:00]
    feat(llm): improve settings of the OpenAILike LLM
				
					
						
							
							
								 
						
							
966af4771d  fix(settings): enable CORS by default so it will work when using the TS SDK (SPA) (#1925)  [2024-05-10 14:13:46 +02:00]
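CORS is needed here because a browser-based single-page app calling the API from another origin is blocked unless the server sends the right headers. A server-level settings fragment in this spirit might look like the sketch below; the key names are assumptions modeled on typical FastAPI CORS options, not verified against the repository:

```yaml
server:
  cors:
    enabled: true            # lets a browser SPA (e.g. the TS SDK) call the API
    allow_origins: ["*"]     # tighten to your SPA's origin in production
    allow_methods: ["*"]
    allow_headers: ["*"]
```

A wildcard origin is convenient for local development but should be narrowed to the deployed frontend's origin before exposing the API publicly.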
				
					
						
							
							
								 
						
							
d13029a046  feat(docs): add privategpt-ts SDK (#1924)  [2024-05-10 14:13:15 +02:00]
				
					
						
							
							
								 
						
							
9d0d614706  fix: replace unsafe `eval()` with `json.loads()` (#1890)  [2024-04-30 09:58:19 +02:00]
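The pattern behind this fix: parsing a JSON-shaped string with `eval()` executes whatever Python expression arrives in the input, while `json.loads()` only ever builds data and rejects anything that is not JSON. A generic illustration (not the repository's code):

```python
import json

untrusted = '{"query": "hello", "top_k": 3}'

# Unsafe: eval() would happily run something like
# '__import__("os").system(...)' if it arrived instead of JSON.
# data = eval(untrusted)   # never do this with external input

# Safe: json.loads() parses data only
data = json.loads(untrusted)
assert data == {"query": "hello", "top_k": 3}

# Malicious payloads are rejected, not executed
try:
    json.loads('__import__("os")')
except json.JSONDecodeError:
    print("rejected")  # → rejected
```

Note one behavioral difference when migrating: `json.loads` requires strict JSON (double quotes, `true`/`false`/`null`), so strings that `eval` used to accept, like `"{'a': 1}"`, will now raise and need to be produced as real JSON upstream.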
				
					
						
							
							
								 
						
							
e21bf20c10  feat: prompt_style applied to all LLMs + extra LLM params (#1835)  [2024-04-30 09:53:10 +02:00]
    * Moved prompt_style to the main LLM settings, since all LLMs from llama_index can utilize it. Also added temperature, context window size, max_tokens, and max_new_tokens to openailike to keep its settings consistent with the other implementations.
    * Removed prompt_style from llamacpp entirely
    * Fixed settings-local.yaml to include prompt_style in the LLM settings instead of llamacpp
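The settings change described above amounts to relocating one key. Schematically, with the surrounding key names taken as plausible examples rather than verified against the repository:

```yaml
# Before: prompt_style lived under the llamacpp section
# llamacpp:
#   prompt_style: "mistral"

# After: prompt_style sits in the shared llm settings, where every
# llama_index-backed LLM (llamacpp, openailike, ...) can pick it up
llm:
  mode: llamacpp
  prompt_style: "mistral"
  temperature: 0.1        # extra params now honored by openailike too
  context_window: 3900
  max_new_tokens: 256
```

Hoisting the key means one prompt-formatting setting governs all backends instead of each backend carrying its own copy.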