# Memory settings

#### What is Memory, and How Does It Work? How Do I Adjust Memory Length?

**Memory** refers to the content the AI model retains during a conversation. We recommend using the **default memory length** for optimal performance and cost-efficiency.<br>

In **Advanced Memory Mode**, memory length determines how many dialogue rounds the AI will remember (one round = your message + AI's reply).

**Higher round counts mean:**<br>

* More context is fed to the AI.
* Higher token usage (text volume), which increases **Ruby** costs.

{% hint style="info" %}
Current AI technology still has limitations. Even with full memory input, the AI may occasionally "forget" details.
{% endhint %}

To adjust the memory length, click the **+ icon** on the left side of the chat window.

<figure><img src="https://887975652-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlSTApa0gGqZr1iiroILY%2Fuploads%2FxIHCRhGAJhAmYUreHyVM%2Fimage.png?alt=media&#x26;token=2e628311-75e9-4d9b-adc7-6487c025617a" alt=""><figcaption></figcaption></figure>

#### Select the memory length you prefer, or use **Advanced Memory Mode** if needed.

<figure><img src="https://887975652-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlSTApa0gGqZr1iiroILY%2Fuploads%2F8hMl0gdpO2022PsiRxhl%2Fimage.png?alt=media&#x26;token=c1bf7016-c032-4624-a229-57b9cff8767b" alt=""><figcaption></figcaption></figure>

#### What’s the Purpose of Advanced Memory Mode?

We designed **Advanced Memory Mode** to give users full control over their chat experience. You can **balance three key factors**:<br>

* **Memory retention** (how much context the AI retains),
* **Chat costs** (higher memory = more tokens/Ruby usage),
* **Model performance** (different AI models have *varying memory capacities*, all of which are inherently limited).

This flexibility lets you prioritize what matters most: richer context, cost efficiency, or leveraging a model’s specific strengths.

#### Pro Tips to Save Rubies

**Method 1: Reset Chats (Keep Context, Lower Costs)**

1. **Make a Summary**: Paste this exact command into your chat:\
   `(Execute command: Summarize all key events and memories <plot> to date)`

→ The AI will generate a compact recap inside `<plot>` tags.

2. **Migrate to a New Chat**:

   * Click **Edit** on the summary, copy the text.
   * Start a **new chat**, paste it under the greeting like this:

     <pre class="language-html"><code class="lang-html">&#x3C;plot>  
     【Previously】  
     <strong>[Paste summary here]  
     </strong>&#x3C;/plot>  
     </code></pre>

Then you can continue chatting with the AI in the new chat.

→ Fresh chat = lower token count, same narrative flow!
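The savings from Method 1 can be sketched with made-up numbers (the real tokenizer and per-round sizes are not published, so every figure below is an illustrative assumption): migrating a long history into a short `<plot>` summary shrinks the input the AI must process on every turn.

```python
# Illustrative sketch only -- all token counts are hypothetical assumptions,
# not actual platform figures.
history_rounds = 60        # assumed length of the old chat
tokens_per_round = 150     # assumed average tokens per round (message + reply)
summary_tokens = 400       # assumed size of the <plot> recap

full_history_tokens = history_rounds * tokens_per_round
print(full_history_tokens)  # 9000 tokens fed to the AI every turn in the old chat
print(summary_tokens)       # 400 tokens after migrating to a new chat
```

Under these assumptions, the new chat starts each turn with a fraction of the old input size, which is why costs drop while the storyline carries over.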

**Method 2: Pin Critical Plot Points**

* **Add pivotal details** to your Persona (e.g., "The protagonist is allergic to roses"). This acts as a "cheat sheet" for the AI, reducing its reliance on memory.

***

### FAQ

#### 1. If I’ve only chatted for 2 rounds but set memory to 100 rounds, will I be charged extra?

**A:** No. Costs are calculated based on the **actual word count processed by the AI**, not your memory length setting.

***

#### 2. What happens if my chat exceeds 200 rounds?

**A:** Our **context search model** ensures relevance, even with limited memory settings. For example:

* If you chat 200+ rounds but only set memory to 30, the AI will prioritize retrieving *critical early context* (e.g., family details from the first few rounds) over less relevant later exchanges.
* The AI still recalls key details without needing ultra-high memory limits.

***

#### 3. Why did my Ruby usage suddenly spike?

**A:** Increasing the memory length partway through a long conversation *dramatically increases the input tokens*, and therefore the *price*. Example:

* Starting at 8 rounds → raising to 20 rounds forces the AI to process **20 rounds’ worth of text** instead of 8.
* More tokens = higher Ruby costs. Adjust the slider cautiously during long conversations!
