Fix vanilla model answer in example benchmark (#219)
This commit is contained in:
parent 72b01a9909
commit 2a69f1574e
@@ -253,7 +253,7 @@
 "\n",
 " if is_vanilla_llm:\n",
 " llm = agent\n",
-" answer = str(llm([{\"role\": \"user\", \"content\": question}]))\n",
+" answer = str(llm([{\"role\": \"user\", \"content\": question}]).content)\n",
 " token_count = {\"input\": llm.last_input_token_count, \"output\": llm.last_output_token_count}\n",
 " intermediate_steps = str([])\n",
 " else:\n",
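A minimal sketch of why the fix adds `.content`: calling the vanilla model returns a chat-message object rather than a plain string, so `str()` on the call result yields the object's repr instead of the answer text. The `ChatMessage` and `DummyModel` classes below are hypothetical stand-ins for illustration, not the library's actual classes.

```python
class ChatMessage:
    """Stand-in for a chat-message object with role and content fields."""
    def __init__(self, role, content):
        self.role = role
        self.content = content

    def __repr__(self):
        # str() on the message yields this repr, not the answer text.
        return f"ChatMessage(role={self.role!r}, content={self.content!r})"


class DummyModel:
    """Hypothetical vanilla-LLM wrapper that returns a ChatMessage."""
    def __call__(self, messages):
        return ChatMessage("assistant", "Paris")


llm = DummyModel()
question = "What is the capital of France?"

# Before the fix: str() of the whole message object, not the answer text.
before = str(llm([{"role": "user", "content": question}]))

# After the fix: extract the text via .content before converting to str.
after = str(llm([{"role": "user", "content": question}]).content)

print(before)  # ChatMessage(role='assistant', content='Paris')
print(after)   # Paris
```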