Similarly, when a TCP connection is closed with the four-way handshake: in the first wave the client sends a FIN segment to the server; in the second, the server replies to the client with an ACK after receiving that FIN; in the third, the server sends its own FIN to the client; and in the fourth, the client sends an ACK back to the server after receiving the server's FIN. The four-way handshake therefore takes 2 RTTs to complete, one of which is the time the client spends waiting for the server's FIN.
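To make the ordering concrete, here is a minimal illustrative sketch that just enumerates the four segments in the order described above; it involves no real networking, and all names are invented for illustration:

```python
# Illustrative only: the four teardown segments in order,
# with the client as the active closer. No sockets involved.
waves = [
    ("client -> server", "FIN"),  # 1st wave: client is done sending data
    ("server -> client", "ACK"),  # 2nd wave: server acknowledges the client's FIN
    ("server -> client", "FIN"),  # 3rd wave: server closes its side too
    ("client -> server", "ACK"),  # 4th wave: client acknowledges the server's FIN
]
for i, (direction, segment) in enumerate(waves, start=1):
    print(f"wave {i}: {direction}  {segment}")
```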
2023-06-22 12:55:14 - error_code=context_length_exceeded error_message="This model's maximum context length is 8192 tokens. However, your messages resulted in 8206 tokens. Please reduce the length of the messages." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False
2023-06-22 12:55:14 - This model's maximum context length is 8192 tokens. However, your messages resulted in 8206 tokens. Please reduce the length of the messages.
Traceback (most recent call last):
File "/Users/qinyingjie/miniconda3/envs/py36tf1/lib/python3.9/site-packages/chainlit/__init__.py", line 61, in wrapper
return await user_function(**params_values)
File "azure_demo.py", line 24, in main
res = await cl.make_async(sync_func)()
File "/Users/qinyingjie/miniconda3/envs/py36tf1/lib/python3.9/site-packages/asyncer/_main.py", line 358, in wrapper
return await anyio.to_thread.run_sync(
File "/Users/qinyingjie/miniconda3/envs/py36tf1/lib/python3.9/site-packages/anyio/to_thread.py", line 33, in run_sync
File "/Users/qinyingjie/miniconda3/envs/py36tf1/lib/python3.9/site-packages/openai/api_requestor.py", line 624, in _interpret_response
self._interpret_response_line(
File "/Users/qinyingjie/miniconda3/envs/py36tf1/lib/python3.9/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 8206 tokens. Please reduce the length of the messages.
(The same context_length_exceeded error and traceback repeat on each subsequent message, at 12:55:31, 12:55:38, 12:55:49, and 12:56:21, with the reported prompt size growing to 8215, 8224, 8231, and 8246 tokens against the 8192-token limit.)
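The reported prompt size climbs with every turn because the full chat history is resent on each request, so the conversation eventually outgrows the model's 8192-token window. A common remedy is to trim the oldest turns before calling the API. Below is a minimal sketch, assuming tiktoken is installed; trim_messages, prompt_tokens, and reserve are hypothetical names, and the per-message token overhead is only an estimate:

```python
import tiktoken

def trim_messages(messages, model="gpt-4", max_tokens=8192, reserve=1024):
    """Drop the oldest non-system messages until the prompt fits the window."""
    enc = tiktoken.encoding_for_model(model)

    def prompt_tokens(msgs):
        # Rough estimate: content tokens plus ~4 tokens of per-message framing.
        return sum(len(enc.encode(m["content"])) + 4 for m in msgs)

    msgs = list(messages)
    # Keep the system prompt at index 0 and drop the oldest turn after it,
    # leaving `reserve` tokens of headroom for the completion.
    while prompt_tokens(msgs) > max_tokens - reserve and len(msgs) > 1:
        del msgs[1]
    return msgs
```

Trimming (or summarizing older turns) is the durable fix; shortening any single message only delays the error, since the history keeps accumulating.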
2023-06-22 12:58:01 - WARNING! engine is not default parameter.
engine was transferred to model_kwargs.
Please confirm that engine is what you intended.
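This warning is the one LangChain emits when an unrecognized keyword such as engine is passed to a chat-model class and silently forwarded into model_kwargs. If LangChain is in use here, the Azure deployment usually belongs in deployment_name instead; a sketch with placeholder values only:

```python
from langchain.chat_models import AzureChatOpenAI

# All values below are placeholders; substitute your own resource,
# deployment name, API version, and key.
llm = AzureChatOpenAI(
    deployment_name="gpt-4",  # the Azure deployment, instead of the misrouted engine kwarg
    openai_api_base="https://<your-resource>.openai.azure.com/",
    openai_api_version="2023-05-15",
    openai_api_key="<your-key>",
)
```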
2023-06-22 12:58:01 - Your app is available at http://localhost:8000