7d2de5c96f  fix(ingest): update script label (#1770)  (2024-03-20 20:23:08 +01:00)
    huggingface -> Hugging Face

348df781b5  feat(UI): Faster startup and document listing (#1763)  (2024-03-20 19:11:44 +01:00)

572518143a  feat(docs): Feature/upgrade docs (#1741)  (2024-03-19 21:26:53 +01:00)
    * Upgrade fern version
    * Add info about SDKs

134fc54d7d  feat(ingest): Created a faster ingestion mode - pipeline (#1750)  (2024-03-19 21:24:46 +01:00)
    * Unify pgvector and postgres connection settings
    * Remove local changes
    * Update file pgvector->postgres
    * postgresql should be postgres
    * Adding pipeline ingestion mode
    * disable hugging face parallelism. Continue on file to doc transform failure
    * Semaphore to limit docq async workers. ETA reporting

1efac6a3fe  feat(llm - embed): Add support for Azure OpenAI (#1698)  (2024-03-15 16:49:50 +01:00)
    * Add support for Azure OpenAI
    * fix: wrong default api_version
      Should be dashes instead of underscores.
      See: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference
    * fix: code styling
      applied "make check" changes
    * refactor: extend documentation
    * mention azopenai as available option and extras
    * add recommended section
    * include settings-azopenai.yaml configuration file
    * fix: documentation
258d02d87c  fix(docs): Minor documentation amendment (#1739)  (2024-03-15 16:36:32 +01:00)
    * Unify pgvector and postgres connection settings
    * Remove local changes
    * Update file pgvector->postgres
    * postgresql should be postgres

63de7e4930  feat: unify settings for vector and nodestore connections to PostgreSQL (#1730)  (2024-03-15 09:55:17 +01:00)
    * Unify pgvector and postgres connection settings
    * Remove local changes
    * Update file pgvector->postgres

68b3a34b03  feat(nodestore): add Postgres for the doc and index store (#1706)  (2024-03-14 17:12:33 +01:00)
    * Adding Postgres for the doc and index store
    * Adding documentation. Rename postgres database local->simple. Postgres storage dependencies
    * Update documentation for postgres storage
    * Renaming feature to nodestore
    * update docstore -> nodestore in doc
    * missed some docstore changes in doc
    * Updated poetry.lock
    * Formatting updates to pass ruff/black checks
    * Correction to unreachable code!
    * Format adjustment to pass black test
    * Adjust extra inclusion name for vector pg
    * extra dep change for pg vector
    * storage-postgres -> storage-nodestore-postgres
    * Hash change on poetry lock

d17c34e81a  fix(settings): set default tokenizer to avoid running make setup fail (#1709)  (2024-03-13 09:53:40 +01:00)
84ad16af80  feat(docs): upgrade fern (#1596)  (2024-03-11 23:02:56 +01:00)

821bca32e9  feat(local): tiktoken cache within repo for offline (#1467)  (2024-03-11 22:55:13 +01:00)

02dc83e8e9  feat(llm): adds several settings for llamacpp and ollama (#1703)  (2024-03-11 22:51:05 +01:00)
410bf7a71f  feat(ui): maintain score order when curating sources (#1643)  (2024-03-11 22:27:30 +01:00)
    * Update ui.py
      Changed 'curated_sources' from a list, in order to maintain score order when returning the curated sources.
    * Maintain score order after curating sources

290b9fb084  feat(ui): add sources check to not repeat identical sources (#1705)  (2024-03-11 22:24:18 +01:00)

1b03b369c0  chore(main): release 0.4.0 (#1628)  (2024-03-06 17:53:35 +01:00)
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

45f05711eb  feat: Upgrade LlamaIndex to 0.10 (#1663)  (2024-03-06 17:51:30 +01:00)
    * Extract optional dependencies
    * Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
    * Support Ollama embeddings
    * Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine
    * Fix vector retriever filters
12f3a39e8a  Update X handle to Zylon private gpt (#1644)  (2024-02-23 15:51:35 +01:00)

cd40e3982b  feat(Vector): support pgvector (#1624)  (2024-02-20 15:29:26 +01:00)

066ea5bf28  chore(main): release 0.3.0 (#1413)  (2024-02-16 17:42:39 +01:00)
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

aa13afde07  feat(UI): Select file to Query or Delete + Delete ALL (#1612)  (2024-02-16 17:36:09 +01:00)
    Co-authored-by: Robin Boone <rboone@sofics.com>
24fb80ca38  fix(UI): Updated ui.py. Frees up the CPU to not be bottlenecked.  (2024-02-16 12:52:14 +01:00)
    Updated ui.py to include a small sleep timer while building the stream deltas. This recursive function fires off so quickly that it eats up too much of the CPU. The small sleep frees the CPU from being bottlenecked. The value can go lower/shorter, but 0.02 or 0.025 seems to work well. (#1589)
    Co-authored-by: root <root@wesgitlabdemo.icl.gtri.org>

6bbec79583  feat(llm): Add support for Ollama LLM (#1526)  (2024-02-09 15:50:50 +01:00)
b178b51451  feat(bulk-ingest): Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432)  (2024-02-07 19:59:32 +01:00)

24fae660e6  feat: Add stream information to generate SDKs (#1569)  (2024-02-02 16:14:22 +01:00)

3e67e21d38  Add embedding mode config (#1541)  (2024-01-25 10:55:32 +01:00)

869233f0e4  fix: Adding an LLM param to fix broken generator from llamacpp (#1519)  (2024-01-17 18:10:45 +01:00)
e326126d0d  feat: add mistral + chatml prompts (#1426)  (2024-01-16 22:51:14 +01:00)

6191bcdbd6  fix: minor bug in chat stream output - python error being serialized (#1449)  (2024-01-16 16:41:20 +01:00)

d3acd85fe3  fix(tests): load the test settings only when running tests  (2024-01-09 12:03:16 +01:00)
    The previous implementation caused false positives with the latest version of LlamaIndex.

0a89d76cc5  fix(docs): Update quickstart doc and set version in pyproject.toml to 0.2.0  (2023-12-26 13:09:31 +01:00)
2d27a9f956  feat(llm): Add openailike llm mode (#1447)  (2023-12-26 10:26:08 +01:00)
    This mode behaves the same as the openai mode, except that it allows setting custom models not supported by OpenAI. It can be used with any tool that serves models from an OpenAI-compatible API. Implements #1424.

fee9f08ef3  Move back to 3900 for the context window to avoid melting local machines  (2023-12-22 18:21:43 +01:00)

fde2b942bc  fix(deploy): fix local and external dockerfiles  (2023-12-22 14:16:46 +01:00)

4c69c458ab  Improve ingest logs (#1438)  (2023-12-21 17:13:46 +01:00)
4780540870  feat(settings): Configurable context_window and tokenizer (#1437)  (2023-12-21 14:49:35 +01:00)

6eeb95ec7f  feat(API): Ingest plain text (#1417)  (2023-12-18 21:47:05 +01:00)
    * Add ingest/text route to ingest plain text
    * Add new ingest text test and adapt ingest/file ones
    * Include new API in docs
    * Remove duplicated logic

059f35840a  fix(docker): docker broken copy (#1419)  (2023-12-18 16:55:18 +01:00)

8ec7cf49f4  feat(settings): Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415)  (2023-12-17 16:11:08 +01:00)
    * Update LlamaCPP dependency
    * Default to TheBloke/Mistral-7B-Instruct-v0.2-GGUF
    * Fix API docs
c71ae7cee9  feat(ui): make chat area stretch to fill the screen (#1397)  (2023-12-17 12:02:13 +01:00)

2564f8d2bb  fix(settings): correct yaml multiline string (#1403)  (2023-12-16 19:02:46 +01:00)

4e496e970a  docs: remove misleading comment about pgpt working with python 3.12 (#1394)  (2023-12-15 21:35:02 +01:00)
    I was misled into believing I could install using python 3.12, whereas the pyproject.toml explicitly states otherwise. This PR only removes this comment to make sure other people are not also trapped 😄

3582764801  ci: fix preview docs checkout ref (#1393)  (2023-12-12 20:33:34 +01:00)
1d28ae2915  docs: fix minor capitalization typo (#1392)  (2023-12-12 20:31:38 +01:00)

e8ac51bba4  chore(main): release 0.2.0 (#1387)  (2023-12-10 20:08:12 +01:00)
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

145f3ec9f4  feat(ui): Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353)  (2023-12-10 19:45:14 +01:00)

a072a40a7c  Allow setting OpenAI model in settings (#1386)  (2023-12-09 20:13:00 +01:00)
    feat(settings): Allow setting openai model to be used. Default to GPT 3.5
a3ed14c58f  feat(llm): drop default_system_prompt (#1385)  (2023-12-08 23:13:51 +01:00)
    As discussed on Discord, the decision has been made to remove the system prompts by default, to better segregate the API and the UI usages.
    A concurrent PR (#1353) is enabling the dynamic setting of a system prompt in the UI.
    Therefore, if UI users want to use a custom system prompt, they can specify one directly in the UI.
    If API users want to use a custom prompt, they can pass it directly in the messages they send to the API.
    In light of the two use cases above, it becomes clear that a default system_prompt does not need to exist.

f235c50be9  Delete old docs (#1384)  (2023-12-08 22:39:23 +01:00)

9302620eac  Adding German-speaking model to documentation (#1374)  (2023-12-08 11:26:25 +01:00)

9cf972563e  Add setup option to Makefile (#1368)  (2023-12-08 10:34:12 +01:00)