50b3027a24 | docs: update docs and capture (#2029) | 2024-08-01 10:01:22 +02:00
  * docs: update Readme
  * style: refactor image
  * docs: change important to tip

54659588b5 | fix: nomic embeddings (#2030) | 2024-08-01 09:43:30 +02:00
  * fix: allow to configure trust_remote_code
    based on: https://github.com/zylon-ai/private-gpt/issues/1893#issuecomment-2118629391
  * fix: nomic hf embeddings

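The switch matters because Nomic embedding models ship custom modeling code that Hugging Face loaders refuse to run unless explicitly allowed. A minimal sketch of the idea, assuming a sentence-transformers style load; the model name and wiring are illustrative, not taken from this log:

```python
# Hedged sketch: Nomic embedding models need trust_remote_code=True to load,
# which is what the new configuration option exposes. Model name is an example.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "nomic-ai/nomic-embed-text-v1.5",  # example Nomic model on Hugging Face
    trust_remote_code=True,            # opt in to the model's custom code
)
print(model.encode(["hello world"]).shape)
```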
				
					
						
							
							
								 
						
							
8119842ae6 | feat(recipe): add our first recipe `Summarize` (#2028) | 2024-07-31 16:53:27 +02:00
  * feat: add summary recipe
  * test: add summary tests
  * docs: move all recipes docs
  * docs: add recipes and summarize doc
  * docs: update openapi reference
  * refactor: split method in two methods (summary)
  * feat: add initial summarize ui
  * feat: add mode explanation
  * fix: mypy
  * feat: allow to configure async property in summarize
  * refactor: move modes to enum and update mode explanations
  * docs: fix url
  * docs: remove list-llm pages
  * docs: remove double header
  * fix: summary description

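As a rough usage sketch of the new recipe over the HTTP API; the port, route and payload fields below are assumptions for illustration, not taken from this log:

```python
# Hypothetical call to the Summarize recipe on a locally running PrivateGPT;
# the endpoint path and JSON fields are assumed here for illustration only.
import requests

response = requests.post(
    "http://localhost:8001/v1/summarize",  # assumed local port and route
    json={"text": "PrivateGPT answers questions about your documents, fully offline."},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```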
				
					
						
							
							
								 
						
							
40638a18a5 | fix: unify embedding models (#2027) | 2024-07-31 14:35:46 +02:00
  * feat: unify embedding model to nomic
  * docs: add embedding dimensions mismatch
  * docs: fix fern

9027d695c1 | feat: make llama3.1 as default (#2022) | 2024-07-31 14:35:36 +02:00
  * feat: change ollama default model to llama3.1
  * chore: bump versions
  * feat: Change default model in local mode to llama3.1
  * chore: make sure last poetry version is used
  * fix: mypy
  * fix: do not add BOS (with last llamacpp-python version)

e54a8fe043 | fix: prevent to ingest local files (by default) (#2010) | 2024-07-31 14:33:46 +02:00
  * feat: prevent local ingestion (by default) and add a white-list
  * docs: add local ingestion warning
  * docs: add missing comment
  * fix: update exception error
  * fix: black

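The mechanism, roughly: local file ingestion is rejected unless the file sits under an explicitly white-listed folder. A minimal sketch of that check; the names and exact behaviour are illustrative, not the project's actual code:

```python
# Illustrative only: reject local paths unless their folder is white-listed.
from pathlib import Path

ALLOWED_LOCAL_FOLDERS: list[str] = []  # empty by default, so local ingestion is blocked


def validate_local_path(path: str) -> Path:
    resolved = Path(path).resolve()
    for allowed in ALLOWED_LOCAL_FOLDERS:
        if resolved.is_relative_to(Path(allowed).resolve()):
            return resolved
    raise ValueError(
        f"Ingestion of local file {resolved} is not allowed; "
        "add its folder to the white-list to enable it."
    )
```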
				
					
						
							
							
								 
						
							
1020cd5328 | fix: light mode (#2025) | 2024-07-31 12:59:31 +02:00

65c5a1708b | chore(docker): dockerfiles improvements and fixes (#1792) | 2024-07-30 17:59:38 +02:00
  * `UID` and `GID` build arguments for `worker` user
  * `POETRY_EXTRAS` build argument with default values
  * Copy `Makefile` for `make ingest` command
  * Do NOT copy markdown files
    I doubt anyone reads a markdown file within a Docker image
  * Fix PYTHONPATH value
  * Set home directory to `/home/worker` when creating user
  * Combine `ENV` instructions together
  * Define environment variables with their defaults
    - For documentation purposes
    - Reflect defaults set in settings-docker.yml
  * `PGPT_EMBEDDING_MODE` to define embedding mode
  * Remove ineffective `python3 -m pipx ensurepath`
  * Use `&&` instead of `;` to chain commands to detect failure better
  * Add `--no-root` flag to poetry install commands
  * Set PGPT_PROFILES to docker
  * chore: remove envs
  * chore: update to use ollama in docker-compose
  * chore: don't copy makefile
  * chore: don't copy fern
  * fix: tiktoken cache
  * fix: docker compose port
  * fix: ffmpy dependency (#2020)
  * fix: block ffmpy to commit sha
  * feat(llm): autopull ollama models (#2019)
  * chore: update ollama (llm)
  * feat: allow to autopull ollama models
  * fix: mypy
  * chore: install always ollama client
  * refactor: check connection and pull ollama method to utils
  * docs: update ollama config with autopulling info
  ...
  * chore: autopull ollama models
  * chore: add GID/UID comment
  ...
  Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

d080969407 | added llama3 prompt (#1962) | 2024-07-29 17:28:00 +02:00
  * added llama3 prompt
  * more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14
  * fix: new llama3 prompt
  Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

d4375d078f | fix(ui): gradio bug fixes (#2021) | 2024-07-29 16:48:16 +02:00
  * fix: when two user messages were sent
  * fix: add source divider
  * fix: add favicon
  * fix: add zylon link
  * refactor: update label

20bad17c98 | feat(llm): autopull ollama models (#2019) | 2024-07-29 13:25:42 +02:00
  * chore: update ollama (llm)
  * feat: allow to autopull ollama models
  * fix: mypy
  * chore: install always ollama client
  * refactor: check connection and pull ollama method to utils
  * docs: update ollama config with autopulling info

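A hedged sketch of the "check the connection, then pull any missing model" behaviour described above, using the ollama Python client; the helper names are illustrative and the response shape may differ between client versions:

```python
# Illustrative sketch: verify the Ollama server is reachable, then pull the
# configured model only if it is not installed yet.
import ollama


def check_connection(client: ollama.Client) -> bool:
    try:
        client.list()
        return True
    except Exception:
        return False


def pull_model_if_missing(client: ollama.Client, model_name: str) -> None:
    # Assumes the dict-style response of older ollama client versions.
    installed = {model["name"] for model in client.list().get("models", [])}
    if model_name not in installed:
        client.pull(model_name)


client = ollama.Client(host="http://localhost:11434")
if check_connection(client):
    pull_model_if_missing(client, "llama3.1")
```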
				
					
						
							
							
								 
						
							
dabf556dae | fix: ffmpy dependency (#2020) | 2024-07-29 11:56:57 +02:00
  * fix: ffmpy dependency
  * fix: block ffmpy to commit sha

05a986231c | Add proper param to demo urls (#2007) | 2024-07-22 14:44:03 +02:00

b62669784b | docs: update welcome page (#2004) | 2024-07-18 14:42:39 +02:00

2c78bb2958 | docs: add PR and issue templates (#2002) | 2024-07-18 12:56:10 +02:00
  * chore: add pull request template
  * chore: add issue templates
  * chore: require more information in bugs

90d211c5cd | Update README.md (#2003) | 2024-07-18 12:11:24 +02:00
  * Update README.md
    Remove the outdated contact form and point to Zylon website for those looking for a ready-to-use enterprise solution built on top of PrivateGPT
  * Update README.md
    Update text to address the comments
  * Update README.md
    Improve text

43cc31f740 | feat(vectordb): Milvus vector db Integration (#1996) | 2024-07-18 10:55:45 +02:00
  * integrate Milvus into Private GPT
  * adjust milvus settings
  * update doc info and reformat
  * adjust milvus initialization
  * adjust import error
  * minor update
  * adjust format
  * adjust the db storing path
  * update doc

4523a30c8f | feat(docs): update documentation and fix preview-docs (#2000) | 2024-07-18 10:06:51 +02:00
  * docs: add missing configurations
  * docs: replace HF embeddings with ollama
  * docs: add disclaimer about Gradio UI
  * docs: improve readability in concepts
  * docs: reorder `Fully Local Setups`
  * docs: improve setup instructions
  * docs: avoid duplicate documentation and use a table to show the different options
  * docs: rename privateGpt to PrivateGPT
  * docs: update ui image
  * docs: remove useless header
  * docs: convert ingestion disclaimers to alerts
  * docs: add UI alternatives
  * docs: reference UI alternatives in disclaimers
  * docs: fix table
  * chore: update doc preview version
  * chore: add permissions
  * chore: remove useless line
  * docs: fixes
  ...

01b7ccd064 | fix(config): make tokenizer optional and include a troubleshooting doc (#1998) | 2024-07-17 10:06:27 +02:00
  * docs: add troubleshooting
  * fix: pass HF token to setup script and prevent downloading the tokenizer when it is empty
  * fix: improve log and disable specific tokenizer by default
  * chore: change HF_TOKEN environment to be aligned with default config
  * fix: mypy

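Roughly what "optional tokenizer, HF token, and fallback" means in practice; the guard below is an assumption about the behaviour, not the project's code:

```python
# Illustrative sketch: skip the tokenizer when none is configured, pass the HF token
# for gated repositories, and fall back to a default tokenizer if the download fails.
import os
from transformers import AutoTokenizer


def load_tokenizer(tokenizer_name: str | None):
    if not tokenizer_name:
        return None  # tokenizer is optional: nothing is downloaded
    try:
        return AutoTokenizer.from_pretrained(
            tokenizer_name,
            token=os.environ.get("HF_TOKEN") or None,  # needed for gated models
        )
    except Exception:
        return None  # caller falls back to the default tokenizer
```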
				
					
						
							
							
								 
						
							
15f73dbc48 | docs: update repo links, citations (#1990) | 2024-07-09 10:03:57 +02:00
  * docs: update project links
  ...
  * docs: update citation

187bc9320e | (feat): add github button (#1989) | 2024-07-09 08:48:47 +02:00
  Co-authored-by: chdeskur <chdeskur@gmail.com>

dde02245bc | fix(docs): Fix concepts.mdx referencing to installation page (#1779) | 2024-07-08 16:19:50 +02:00
  * Fix/update concepts.mdx referencing to installation page
    The link for `/installation` is broken in the "Main Concepts" page.
    The correct path would be `./installation` or maybe `/installation/getting-started/installation`
  * fix: docs
  Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

067a5f144c | feat(docs): Fix setup docu (#1926) | 2024-07-08 16:19:16 +02:00
  * Update settings.mdx
  * docs: add cmd
  Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

2612928839 | feat(vectorstore): Add clickhouse support as vector store (#1883) | 2024-07-08 16:18:22 +02:00
  * Added ClickHouse vector store support
  * port fix
  * updated lock file
  * fix: mypy
  * fix: mypy
  Co-authored-by: Valery Denisov <valerydenisov@double.cloud>
  Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

fc13368bc7 | feat(llm): Support for Google Gemini LLMs and Embeddings (#1965) | 2024-07-08 11:47:36 +02:00
  * Support for Google Gemini LLMs and Embeddings
    Initial support for Gemini; enables usage of Google LLMs and embedding models (see settings-gemini.yaml).
    Install via: poetry install --extras "llms-gemini embeddings-gemini"
    Notes:
    - had to bump llama-index-core to a later version that supports Gemini
    - poetry --no-update did not work: Gemini/llama_index seem to require more (transient) updates to make it work...
  * fix: crash when gemini is not selected
  * docs: add gemini llm
  Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>

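A hedged sketch of what the new extras enable through the llama-index Gemini integrations mentioned above; the model id and key handling are illustrative:

```python
# Illustrative only: use Google Gemini for both completion and embeddings via the
# llama-index integrations installed by the "llms-gemini embeddings-gemini" extras.
from llama_index.llms.gemini import Gemini
from llama_index.embeddings.gemini import GeminiEmbedding

llm = Gemini(api_key="YOUR_GOOGLE_API_KEY", model="models/gemini-pro")
embed_model = GeminiEmbedding(api_key="YOUR_GOOGLE_API_KEY")

print(llm.complete("Say hello in one word."))
print(len(embed_model.get_text_embedding("hello")))
```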
				
					
						
							
							
								 
						
							
19a7c065ef | feat(docs): update doc for ipex-llm (#1968) | 2024-07-08 09:42:44 +02:00

b687dc8524 | feat: bump dependencies (#1987) | 2024-07-05 16:31:13 +02:00

c7212ac7cc | fix(LLM): mistral ignoring assistant messages (#1954) | 2024-05-30 15:41:16 +02:00
  * fix: mistral ignoring assistant messages
  * fix: typing
  * fix: fix tests

3b3e96ad6c | Allow parameterizing OpenAI embeddings component (api_base, key, model) (#1920) | 2024-05-17 09:52:50 +02:00
  * Allow parameterizing OpenAI embeddings component (api_base, key, model)
  * Update settings
  * Update description

45df99feb7 | Add timeout parameter for better support of openailike LLM tools on local computer (like LM Studio) (#1858) | 2024-05-10 16:44:08 +02:00
  * feat(llm): Improve settings of the OpenAILike LLM

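The underlying idea: an OpenAI-compatible client pointed at a local server such as LM Studio needs a longer request timeout, because local generation can be slow. A sketch using the plain openai client for illustration; the endpoint and model name are assumptions:

```python
# Illustrative only: call a local OpenAI-compatible server (e.g. LM Studio) with a
# generous timeout; the commit exposes an equivalent value in the openailike settings.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="not-needed-locally",
    timeout=120.0,                        # allow slow local generation to finish
)
response = client.chat.completions.create(
    model="local-model",                  # placeholder; the server uses whatever is loaded
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```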
				
					
						
							
							
								 
						
							
966af4771d | fix(settings): enable cors by default so it will work when using ts sdk (spa) (#1925) | 2024-05-10 14:13:46 +02:00

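What "enable CORS" amounts to for a FastAPI app, so a browser-based SPA using the TypeScript SDK can call the API; the permissive values below are purely illustrative:

```python
# Illustrative sketch: allow cross-origin browser requests to the FastAPI app.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],   # a real deployment would usually restrict this
    allow_methods=["*"],
    allow_headers=["*"],
)
```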
				
					
						
							
							
								 
						
							
d13029a046 | feat(docs): add privategpt-ts sdk (#1924) | 2024-05-10 14:13:15 +02:00

9d0d614706 | fix: Replacing unsafe `eval()` with `json.loads()` (#1890) | 2024-04-30 09:58:19 +02:00

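The reason the swap matters: json.loads only parses data, while eval executes arbitrary Python. A tiny illustration:

```python
# json.loads parses untrusted text into plain data; eval would execute it as code.
import json

untrusted = '{"query": "what is in my documents?", "top_k": 2}'
data = json.loads(untrusted)  # safe: returns a dict
print(data["top_k"])
# eval(untrusted) would run any Python expression an attacker managed to inject.
```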
				
					
						
							
							
								 
						
							
e21bf20c10 | feat: prompt_style applied to all LLMs + extra LLM params. (#1835) | 2024-04-30 09:53:10 +02:00
  * Updated prompt_style to be moved to the main LLM setting, since all LLMs from llama_index can utilize this. Also included temperature, context window size, max_tokens and max_new_tokens in the openailike settings to help keep them consistent with the other implementations.
  * Removed prompt_style from llamacpp entirely
  * Fixed settings-local.yaml to include prompt_style in the LLM settings instead of llamacpp.

c1802e7cf0 | fix(docs): Update installation.mdx (#1866) | 2024-04-19 17:10:58 +02:00
  * Update repo url

2a432bf9c5 | fix: make embedding_api_base match api_base when on docker (#1859) | 2024-04-19 15:42:19 +02:00

947e737f30 | fix: "no such group" error in Dockerfile, added docx2txt and cryptography deps (#1841) | 2024-04-19 15:40:00 +02:00
  * Fixed "no such group" error in Dockerfile, added docx2txt to poetry so docx parsing works out of the box for docker containers
  * added cryptography dependency for pdf parsing

49ef729abc | Allow passing HF access token to download tokenizer. Fallback to default tokenizer. | 2024-04-19 15:38:25 +02:00

347be643f7 | fix(llm): special tokens and leading space (#1831) | 2024-04-04 14:37:29 +02:00

08c4ab175e | Fix version in poetry | 2024-04-03 10:59:35 +02:00

f469b4619d | Add required Ollama setting | 2024-04-02 18:27:57 +02:00

94ef38cbba | chore(main): release 0.5.0 (#1708) | 2024-04-02 17:45:15 +02:00
  Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

8a836e4651 | feat(docs): Add guide Llama-CPP Linux AMD GPU support (#1782) | 2024-04-02 16:55:05 +02:00

f0b174c097 | feat(ui): Add Model Information to ChatInterface label | 2024-04-02 16:52:27 +02:00

bac818add5 | feat(code): improve concat of strings in ui (#1785) | 2024-04-02 16:42:40 +02:00

ea153fb92f | feat(scripts): Wipe qdrant and obtain db Stats command (#1783) | 2024-04-02 16:41:42 +02:00

b3b0140e24 | feat(llm): Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800) | 2024-04-02 16:23:10 +02:00

83adc12a8e | feat(RAG): Introduce SentenceTransformer Reranker (#1810) | 2024-04-02 10:29:51 +02:00

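An illustration of the general technique (cross-encoder reranking with sentence-transformers); the model name and example passages are assumptions, not the feature's defaults:

```python
# Illustrative reranking: score each retrieved passage against the query with a
# cross-encoder and keep the best matches.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # example model
query = "How do I enable the reranker?"
passages = [
    "The reranker re-orders retrieved chunks by their relevance to the query.",
    "PrivateGPT supports several vector databases.",
]
scores = reranker.predict([(query, passage) for passage in passages])
ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
print(ranked[0][0])
```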
				
					
						
							
							
								 
						
							
f83abff8bc | feat(docker): set default Docker to use Ollama (#1812) | 2024-04-01 13:08:48 +02:00

087cb0b7b7 | feat(rag): expose similarity_top_k and similarity_score to settings (#1771) | 2024-03-20 22:25:26 +01:00
  * Added RAG settings to settings.py, vector_store and chat_service to add similarity_top_k and similarity_score
  * Updated settings in vector and chat service per Ivan's request
  * Updated code for mypy

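Roughly how the two settings map onto retrieval concepts: similarity_top_k caps how many chunks the retriever returns, and similarity_score acts as a minimum-similarity cut-off on what is kept. A small llama-index sketch of the cut-off part; the values are illustrative:

```python
# Illustrative: drop retrieved nodes below a similarity cut-off (the idea behind the
# similarity_score setting); similarity_top_k limits how many nodes are retrieved at all.
from llama_index.core.postprocessor import SimilarityPostprocessor
from llama_index.core.schema import NodeWithScore, TextNode

nodes = [
    NodeWithScore(node=TextNode(text="relevant chunk"), score=0.82),
    NodeWithScore(node=TextNode(text="barely related chunk"), score=0.31),
]
kept = SimilarityPostprocessor(similarity_cutoff=0.45).postprocess_nodes(nodes)
print([n.node.text for n in kept])  # only the relevant chunk survives
```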