### What problem does this PR solve?

Answers kept in the conversation context still carry reference markers (e.g. `##0$$`). When such a message is passed back to the LLM, the model may copy the markers into its new answer, producing broken reference points.
```
{'role': 'assistant', 'content': '设置在地下或半地下空间 ##0$$。'}
```
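
For illustration, a minimal sketch of scrubbing the markers before a message is forwarded to the LLM. The helper name `strip_reference_markers` is hypothetical; the `##\d+\$\$` pattern is the one used in the diff below, and the Chinese sample text reads "set up in an underground or semi-underground space".

```python
import re

# Matches RAGFlow citation markers such as "##0$$" embedded in prior answers.
REF_MARKER = re.compile(r"##\d+\$\$")

def strip_reference_markers(content: str) -> str:
    """Remove citation markers so the LLM does not echo them back."""
    return REF_MARKER.sub("", content)

# The assistant message shown above loses its stray marker.
msg = {'role': 'assistant', 'content': '设置在地下或半地下空间 ##0$$。'}
print(strip_reference_markers(msg['content']))  # -> 设置在地下或半地下空间 。
```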

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
```diff
@@ -168,7 +168,7 @@ def chat(dialog, messages, stream=True, **kwargs):
     gen_conf = dialog.llm_setting
     msg = [{"role": "system", "content": prompt_config["system"].format(**kwargs)}]
-    msg.extend([{"role": m["role"], "content": m["content"]}
+    msg.extend([{"role": m["role"], "content": re.sub(r"##\d+\$\$", "", m["content"])}
                 for m in messages if m["role"] != "system"])
     used_token_count, msg = message_fit_in(msg, int(max_tokens * 0.97))
     assert len(msg) >= 2, f"message_fit_in has bug: {msg}"
```
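
Stripping the markers only when the prompt is assembled in `chat()`, rather than when the answer is stored, presumably keeps them available in the saved history for citation rendering while still preventing the model from echoing them into new answers.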