---
license: mit
language:
- en
task_categories:
- question-answering
tags:
  - llm
  - memory
  - retrieval
  - context interference
  - long-context

configs:      
  - config_name: core
    description: Randomized updates (keys shuffled across key–value pairs). Recommended as the primary/SOTA comparison setting. At the highest stress tier, all tested models (as of May 2025) fail to reliably recover the final value.
    data_files:
      - split: test
        path: core.parquet
  
  - config_name: sequential_additional
    description: Non-randomized, strict sequential update blocks. Shows that even short contexts (5k–8k tokens) already induce strong context interference for most LLMs; even with this cleanly formatted data, many models' performance still drops rapidly.
    data_files:
      - split: test
        path: sequential_additional.parquet

    

---
**A super easy task for humans** that **all SOTA LLMs fail**: retrieving the correct answer from context. Failing models include GPT-5, Grok-4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama 4, and more.
- Accepted at the ICML 2025 Long-Context Foundation Models Workshop (https://arxiv.org/abs/2506.08184).
- Update: This dataset is integrated into Moonshot AI (Kimi)'s **internal benchmarking framework** for assessing **tracking capacity and context interference in LLMs/agents**.
> **Update (Sept. 6): merged into Moonshot/Kimi AI's internal eval tools and under review by an xAI (Grok) eval team.**


## Key–value update paradigm (what the model sees): 1 key, N updates

```
Key1: Value_1
Key1: Value_2
......
Key1: Value_N
```

Question: 
```
What is the current value (the last value) for Key1?
```

Expected Answer: 
```
The current value of Key1 is Value_N. 
```


## Results:
All tested SOTA LLMs **cannot reliably retrieve** Value_N. Their answers span Value_1 to Value_N, and **as N increases**, the **answers skew** increasingly toward **Value_1**.



## The End / Full details and the cognitive-science basis are listed below
For the full analysis, see below or the paper. In short, the **multi-coreferenced structure** of the data makes all LLMs confuse earlier values with the last one. **Larger models tend to resist better**, yet **within N = 100 updates**, **all LLMs eventually fail to retrieve the last value**.

## Note on dataset scale
N ranges from 1 to 400. We put up to 46 such groups (Key1..Key46) together and ask the model to retrieve just the last value of each key. All values are distinct, so from the model's reply we know how far its answer is from the correct one.

## Impact

This mini dataset's structure is **common in many data-processing tasks**: finance, health, cursor position, etc. For example, consider a sequence of blood-pressure (BP) readings, where the task is to keep track of the most recent value. BP: 120 at triage; BP: 128 ten minutes later; BP: 125 at discharge. **What is the patient's last/current BP?**
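In code, this reduces to a "last write wins" update per key. A minimal illustration in Python (the readings below are hypothetical and not part of the dataset):

```python
# Minimal illustration (hypothetical readings): track the most recent value per key.
readings = [
    ("BP", 120),  # triage
    ("BP", 128),  # 10 minutes later
    ("BP", 125),  # discharge
]

latest = {}
for key, value in readings:
    latest[key] = value  # each update overwrites the previous value

print(latest["BP"])  # -> 125, the current (last) value
```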








## Paper Title: Unable to Forget: Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length
- Accepted at the ICML 2025 Long-Context Foundation Models Workshop.
A simple context-interference evaluation.


## TL;DR
We identify a task that is **super easy for humans** but on which all LLMs—from early 0.1B models to the most modern 600B+ (GPT-5, Grok-4, Gemini, DeepSeek, etc.)—consistently **fail in the same way**. This pinpoints the **core challenge of MRCR**.


- Multi-round co-reference and context interference:

Classic long-context benchmarks often test retrieving a single "needle" from a massive "haystack." MRCR raises the bar by placing many similar needles in the same context, requiring models to select the correct item (up to 8 needles), and shows that all LLMs struggle with this task.

- PI-LLM paper: https://arxiv.org/abs/2506.08184
- OpenAI MRCR dataset: https://huggingface.co/datasets/openai/mrcr
- DeepMind MRCR (Gemini) paper: https://arxiv.org/pdf/2409.12640v2



## Our test takes this one step further
If MRCR is "multiple needles in a haystack", we show the **haystack isn't necessary** to expose core retrieval failures. By isolating—and precisely controlling—the number of similar, co-referenced items (we repeatedly update the value of the same keys in key–value pairs), our paradigm directly measures how interference from up to 400 needles limits retrieval accuracy even without any "haystack" as background. LLMs cannot perform a simple task like "retrieving the last value" of each co-referenced item.

- We observe a clear log-linear decline in accuracy as the number of interfering updates grows (i.e., co-references increase).
- The effect holds across the transformer models we tested. See our paper for details and methodology.

- Our demo site: https://sites.google.com/view/cog4llm
- Our paper (ICML2025  Long-Context Workshop): https://arxiv.org/abs/2506.08184
- Mechanistic research is ongoing. The test is well-established in cognitive science, where it has been studied extensively to measure human **Working Memory capacity**.




## Key–value update paradigm (what the model sees)
We present a classical key–value experiment: the same key is updated multiple times. The model is then asked to return the current (last) value for each key. This isolates co-reference interference without requiring extremely long distractor contexts.

Minimal example (1 key, N updates):
```

Key1: Value_1
Key1: Value_2
......
Key1: Value_N



Question: 

What is the current value (the last value) for Key1?
```

Expected: 
```
The current value of Key1 is Value_N. 
```
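For intuition, a prompt with this shape can be generated in a few lines. The sketch below is purely illustrative; the dataset's actual prompts are stored in its `prompt` column and are not reproduced by this helper:

```python
# Illustrative prompt generator (not the dataset's exact prompt format).
def build_prompt(n_updates: int) -> tuple[str, str]:
    """One key updated n_updates times; the last value is the expected answer."""
    updates = [f"Key1: Value_{i}" for i in range(1, n_updates + 1)]
    question = "What is the current value (the last value) for Key1?"
    prompt = "\n".join(updates) + "\n\n" + question
    return prompt, f"Value_{n_updates}"

prompt, expected = build_prompt(100)  # e.g., N = 100 updates
```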


## Results:
All tested SOTA LLMs **cannot reliably retrieve** Value_N. Their answers span Value_1 to Value_N, and **as N increases**, the **answers skew** increasingly toward **Value_1**.



## Note on dataset scale
N ranges from 1 to 400. We put up to 46 such groups (Key1..Key46) together and ask the model to retrieve just the last value of each key. All values are distinct, so from the model's reply we know how far its answer is from the correct one.
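Because every value is unique, a wrong answer can be mapped back to the update it came from. A sketch of that distance analysis (the helper and variable names here are illustrative, not dataset columns):

```python
# Illustrative: how far back in the update stream the model's answer lies.
def answer_lag(values_in_order, model_answer):
    """Return 0 if the answer is the last value, k if it is k updates earlier, None if absent."""
    if model_answer not in values_in_order:
        return None
    return len(values_in_order) - 1 - values_in_order.index(model_answer)

# Example: four updates to one key; the model answered with the second value.
print(answer_lag(["red", "blue", "green", "gold"], "blue"))  # -> 2
```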



## Why this is challenging for LLMs:
- Multiple co-references to the same key cause strong interference.

1. As the number of updates per key (N) increases, LLMs **confuse earlier values** with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
2. We intentionally restrict the task to retrieving only the last value, keeping search difficulty low and showing that all LLMs are unable to keep track due to **context interference**.


## On Randomization
We **RANDOMIZE** the update order after generation to mimic unpredictable changes, interleaving updates across different keys (i.e., different keys' updates occur back-to-back rather than in contiguous blocks). Counterintuitively, this often helps LLMs, since the final update usually lands near the end of the context. In the sequential setting, most smaller (less than ~600B) models lose track after only a few updates—even with 5–8k-token inputs.
See the **Sequential / Original Non-Randomized Mode** section at the end of this document, where many LLMs' performance still **collapses** with only a **small amount of input (5–8k tokens)**.
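To make the two orderings concrete, here is an illustrative sketch (assumption: the dataset applies a comparable shuffle at generation time, controlled by the `randomize_mode` column):

```python
import random

# Sequential mode: each key's updates form one contiguous block.
sequential = [f"Key{k}: Value_{k}_{i}" for k in range(1, 4) for i in range(1, 4)]

# Randomized mode: the same update lines, interleaved across keys by shuffling.
randomized = sequential.copy()
random.Random(0).shuffle(randomized)

print("\n".join(sequential))
print("---")
print("\n".join(randomized))
```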



## Cognitive science connection: Proactive Interference (PI)
Our test adopts the **classic proactive** interference paradigm from cognitive science, a **foundational method** for studying **human working memory**. PI shows how older, similar information disrupts encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference—not just context length—limits memory and retrieval.

- Interestingly, humans are **also affected by these three dimensions**, but far less than LLMs. Humans consistently outperform even the latest and largest models on this task.

See: https://sites.google.com/view/cog4llm 

## SAME Log-Linear Decline of Accuracy for ALL SOTA LLMs Tested (2019–2025)
- Humans: near-ceiling accuracy (99%+) on this controlled task across conditions (see paper for protocol and exact numbers).
- LLMs: accuracy declines approximately log-linearly with the number of updates per key and with the number of concurrent update blocks (an illustrative trend is sketched below; details, plots, and the model list are in our paper).
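As an illustrative summary (not the paper's fitted model), the trend can be written as a log-linear relationship:

```latex
% Illustrative only (not the paper's fitted coefficients): accuracy falls roughly
% linearly in log N, where N is the number of updates per key.
\mathrm{Acc}(N) \approx \alpha - \beta \log N, \qquad \beta > 0
```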


## Full detail of 3 tests
This dataset includes two additional dimensions of evaluation to show current LLMs' limits, again covering SOTA models: GPT-5, Grok-4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama 4, etc.

- Experiment 2 (Dataset column: exp_keys).
LLMs' capacity to resist interference, and their accuracy in retrieving the last value, decreases log-linearly as the number of concurrent keys (n_keys) grows.
This experiment fixes everything else and varies only n_keys. (Two test sets are provided: one fixes updates at 350, the other at 125 as a lower-difficulty setting.)


- Experiment 3 (Dataset column: exp_valuelength). This causes a rapid decline across LLMs (GPT-5 and Grok-4 decline similarly to GPT-2).
Retrieval accuracy also decreases log-linearly as value length grows.
This experiment fixes everything else and varies only value_length.
Two test sets are provided: one fixes updates at 20 per key, the other at only 4 as a low-difficulty setting.

(Because this test is so hard, even 4 updates per key make all LLMs fail to retrieve the last value, a task we intentionally designed to keep search difficulty low. Retrieving values at other positions yields even lower performance.)
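Each of these dimensions corresponds to a different value of the dataset's `experiment` column, so a single sub-test can be selected directly from the parquet file. A small sketch (the exact string values are assumed from the grading code elsewhere in this card; inspect `df['experiment'].unique()` to confirm):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

df = pd.read_parquet(
    hf_hub_download(repo_id="giantfish-fly/pi-llm", filename="core.parquet", repo_type="dataset")
)

# The 'experiment' column distinguishes the sub-tests (e.g., exp_updates, exp_keys, exp_valuelength).
print(df["experiment"].unique())
exp_keys_rows = df[df["experiment"].str.contains("exp_keys")]
exp_valuelength_rows = df[df["experiment"].str.contains("exp_valuelength")]
```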

## One more thing: Sequential / Non-Randomized Mode (last but interesting)
This mode lives in a separate dataset file (Dataset column: extra_exp_updates_randomoff).
It uses the exact format shown in this document, without randomization. We fix everything and vary only the number of updates, just as in the experiment above, but with randomize_mode turned off (column: randomize_mode).
- This separate dataset consists of 46 of the following blocks, in non-randomized order:



```
Key1: Value_1
Key1: Value_2
......
Key1: Value_N

Key2: Value_1
Key2: Value_2
......
Key2: Value_N

....all the way to the Key46 block

Question:

What is the current value (the last value) for Key1, Key2, ..., Key46?
```


**Result**
- In this mode, **most modern LLMs (all <600B) still confuse the last value with an earlier value after only 50–100 updates** (fewer than 12–25k tokens, far less than any LLM's context window).
- Models quickly confuse earlier values with the most recent one.
- This is the **original and simplest version of the test**.
- Performance for this mode is also **reported in our paper (Figure 4)**.
- **Step-like failure pattern** in these sequential key–value update tests: retrieval accuracy remains near-perfect as interfering information is added in strictly sequential order, until a model-specific threshold is reached—after which **performance drops rapidly to near zero**.


# PI-LLM Dataset File List

This repository hosts the **PI-LLM** dataset.  
Currently it includes two files:

- **core.parquet** → Main dataset (randomized updates). Recommended as the primary/SOTA comparison setting; all tested models fail to reliably retrieve the last value.
- **sequential_additional.parquet** → Sequential mode (non-randomized, strict per-key ordered update blocks). Trivial for humans yet still challenging for many LLMs; smaller (<600B) models are especially affected, with proactive-interference effects clearly exposed even in short contexts (~5–8k tokens).


## Quick Start - Evaluate Your Model

```python
from huggingface_hub import hf_hub_download
import pandas as pd
from openai import OpenAI
import json
import tiktoken

# Set accordingly  
MAX_CONTEXT_WINDOW = 1000000
MODEL = ""  # or your preferred model

# Download the dataset
dataset = pd.read_parquet(
    hf_hub_download(repo_id="giantfish-fly/pi-llm", filename="core.parquet", repo_type="dataset")
)

client = OpenAI()
enc = tiktoken.get_encoding("o200k_base")

def extract_pieces_response_to_dict(model_output, probe_target="current"):
    """
    Extract the dictionary of key-value pairs from the model output.
    First extract using verbal language match, then using colon match.
    Merge the two dictionaries, prioritizing keys from the verbal match.
    """
    import re
    
    if len(model_output) == 0:
        return None
    
    if "error code" in model_output.lower():
        return None
    
    if model_output.startswith("error") or model_output.startswith("Error"):
        return None

    if (re.search(r'\berror\b', model_output, re.IGNORECASE)) and (len(model_output) < 680):
        return None

    # Remove backslashes and asterisks
    model_output = re.sub(r'\\(?!n)', '', model_output)
    model_output = re.sub(r'\*', '', model_output)

    dict_verbal_match = _extract_verbal_matches(model_output, probe_target)
    dict_colon_match = _extract_colon_matches(model_output)

    dict_merged = dict_colon_match.copy()
    dict_merged.update(dict_verbal_match)
    dict_merged.pop("key", None)

    return dict_merged

def _extract_verbal_matches(model_output, probe_target="current"):
    """Extract key-value pairs using verbal patterns like 'The current value of X is Y'"""
    import re
    
    patterns = [
        r"(?:the)?\s*(?:most recent|final|last|latest|current|up-to-date|asked|queried|specified)\s+(?:value|word|term)?(?:s)?(?:\s+\w+){0,1}\s+(?:with|for|of|to)?\s+(?:the )?(?:category|key)?\s*([\"'\[\<]?\w+(?:\s+\w+)?[\"'\]\>]?)\s+(?:is|was)(?:\s*:\s*)?\s+([\"'\[\<]?\w+(?:\s+\w+)?[\"'\]\>]?)(?=\n|[,.;:]|$)",
    ]
    
    dict_response = {}
    for pattern in patterns:
        matches = re.findall(pattern, model_output, re.IGNORECASE | re.DOTALL)
        for match in matches:
            if len(match) >= 2:
                key, value = match[0], match[1]
                key = re.sub(r'[\*\'"""''\[\]\{\}\(\)\<\>]', '', key).strip()
                value = re.sub(r'[\*\'"""''\[\]\{\}\(\)\<\>]', '', value).strip()
                if key and value:
                    dict_response[key] = value
    return dict_response

def _extract_colon_matches(model_output):
    """Extract key-value pairs using colon-separated patterns"""
    import re
    
    # Simple colon-based extraction
    dict_response = {}
    lines = model_output.split('\n')
    for line in lines:
        if ':' in line:
            parts = line.split(':', 1)
            if len(parts) == 2:
                key = re.sub(r'[\*\'"""''\[\]\{\}\(\)\<\>]', '', parts[0]).strip()
                value = re.sub(r'[\*\'"""''\[\]\{\}\(\)\<\>]', '', parts[1]).strip()
                if key and value:
                    dict_response[key] = value
    return dict_response

def grade_pi_response(response, answer_formatted):
    """
    Compute per-row accuracy for PI-LLM: fraction of tracked keys answered with the last value.
    - Parses the ground truth JSON string (answer_formatted) into {key: last_value}.
    - Parses model output into {key: value} using robust extractors.
    - Returns (# of keys with exact value match) / (# of keys in ground truth).
    """
    try:
        # Parse ground truth JSON
        ground_truth = json.loads(answer_formatted)
        
        # Extract key-value pairs from model response using parsing functions
        response_dict = extract_pieces_response_to_dict(response, probe_target="current")
        if not isinstance(ground_truth, dict) or ground_truth is None:
            return 0.0
        if not isinstance(response_dict, dict) or response_dict is None:
            return 0.0
        
        keys = list(ground_truth.keys())
        if len(keys) == 0:
            return 0.0
        correct = sum(1 for k in keys if response_dict.get(k) == ground_truth.get(k))
        return correct / len(keys)
    except Exception as e:
        return 0.0

def n_tokens(messages):
    """Count tokens in messages."""
    return sum([len(enc.encode(m["content"])) for m in messages])

# Evaluate your model (we recommend also using the AUC/weighted score below)
results = []
for index, row in dataset.iterrows():
    messages = json.loads(row["prompt"])
    if n_tokens(messages) > MAX_CONTEXT_WINDOW:
        continue
        
    completion = client.chat.completions.create(
        model=MODEL,
        messages=messages,
    )
    response = completion.choices[0].message.content
    accuracy = grade_pi_response(response, row["answer_formatted"])
    parsed = extract_pieces_response_to_dict(response, probe_target="current")
    
    # Store result with experiment info and raw/parsed responses (useful for axes + error analysis)
    results.append({
        'experiment': row['experiment'],
        'session_id': row['session_id'],
        'run_id': row.get('run_id', None),
        'accuracy': accuracy,
        'index': index,
        'response_text': response,
        'parsed_response': parsed,
    })
    
    print(f"Row {index} ({row['experiment']}, session {row['session_id']}): {accuracy}")

# Calculate accuracy by experiment
results_df = pd.DataFrame(results)

# Group by experiment and calculate mean accuracy
experiment_accuracy = results_df.groupby('experiment')['accuracy'].agg(['mean', 'count']).reset_index()
experiment_accuracy['accuracy_percent'] = experiment_accuracy['mean'] * 100

print("\n=== Accuracy by Experiment ===")
for _, row in experiment_accuracy.iterrows():
    print(f"{row['experiment']}: {row['accuracy_percent']:.1f}% ({row['count']} samples)")

# Average across runs (e.g., 10 sessions via run_id)
if 'run_id' in results_df.columns:
    # Mean accuracy per experiment per run, then average across runs
    per_run = results_df.groupby(['experiment', 'run_id'])['accuracy'].mean().reset_index()
    exp_avg = per_run.groupby('experiment')['accuracy'].mean().reset_index()
    exp_avg['accuracy_percent'] = 100 * exp_avg['accuracy']
    print("\n=== Experiment accuracy averaged across runs (run_id) ===")
    for _, r in exp_avg.iterrows():
        print(f"{r['experiment']}: {r['accuracy_percent']:.1f}% (averaged over runs)")
```

## 🏆 Advanced Evaluation with AUC Scoring (Highly Recommended)

### Why AUC Scoring?
- **Average accuracy** treats all tasks equally → poor model differentiation
- **AUC (log base 1.5)** weighs harder tasks more → better high-end model ranking
- **Essential for research** comparing SOTA models on difficult ranges

### Complete Evaluation Function

```python
import math

def compute_pi_auc_score(results, log_base=1.5):
    """
    PI-LLM AUC score (PRIMARY: 'auc_log1.5'), using log_base(n_updates) weights.
    - For two-mode experiments (keys/value length), also returns easy/hard AUCs.
    - For others (updates/sequential), returns a single overall AUC.
    """
    if not results:
        return {'avg_accuracy': 0.0, 'auc_log1.5': 0.0, 'total_samples': 0}

    def wmean(samples):
        # weight = log_base(max(n_updates, 2)) to reflect difficulty
        ws = [math.log(max(s.get('n_updates', 2), 2), log_base) for s in samples]
        denom = sum(ws)
        return (sum(s['accuracy'] * w for s, w in zip(samples, ws)) / denom) if denom else 0.0

    exp = results[0].get('experiment', '')
    avg = sum(s['accuracy'] for s in results) / len(results)
    overall = wmean(results)

    # Two-mode thresholds
    if 'exp_keys' in exp:
        easy_thr, hard_thr = 125, 350
    elif 'exp_valuelength' in exp:
        easy_thr, hard_thr = 4, 20
    else:
        # Single-mode path
        return {'avg_accuracy': avg, 'auc_log1.5': overall, 'total_samples': len(results)}

    easy = [s for s in results if s.get('n_updates', 0) <= easy_thr]
    hard = [s for s in results if s.get('n_updates', 0) >= hard_thr]

    return {
        'avg_accuracy': avg,
        'auc_log1.5': overall,                 # PRIMARY metric
        'auc_log1.5_easy': wmean(easy) if easy else 0.0,
        'auc_log1.5_hard': wmean(hard) if hard else 0.0,
        'total_samples': len(results),
    }
```

### Usage Example

```python
from datasets import load_dataset

# Load PI-LLM dataset
dataset = load_dataset("giantfish-fly/pi-llm", "core")['test']

# Run your model and collect results
results = []
for sample in dataset:
    pred = your_model(sample['prompt'])  # Your model inference
    accuracy = grade_pi_response(pred, sample['answer_formatted'])
    results.append({
        'accuracy': accuracy,
        'n_updates': sample['n_updates'], 
        'experiment': sample['experiment']
    })

# Compute AUC scores
scores = compute_pi_auc_score(results)

# Display results (format varies by experiment)
print(f"🏆 AUC Score: {scores['auc_log1.5']:.3f}")  # PRIMARY metric
if 'auc_log1.5_easy' in scores:
    print(f"📊 Easy Mode: {scores['auc_log1.5_easy']:.3f}")
    print(f"📊 Hard Mode: {scores['auc_log1.5_hard']:.3f}")


### Output Formats

**Single-Mode Experiments** (`exp_updates`, `exp_sequential`):

```python
{'avg_accuracy': 0.600, 'auc_log1.5': 0.412, 'total_samples': 100}
```

**Two-Mode Experiments** (`exp_keys`, `exp_valuelength`):

```python
{
    'avg_accuracy': 0.600, 'auc_log1.5': 0.576,           # Overall metrics
    'auc_log1.5_easy': 0.850, 'auc_log1.5_hard': 0.350,   # Mode breakdown
    'total_samples': 150
}
```


### 🎯 For Model Ranking: Use `auc_log1.5` as your primary metric!

### ✅ Finally, Total Score (Macro PI-AUC1.5)

**Definition:** average of each test’s `auc_log1.5` (simple, clear leaderboard number).

```python
def compute_total_pi_auc(all_tests, log_base=1.5):
    """
    Total PI-AUC1.5 across tests = average of per-test auc_log1.5.
    all_tests: dict {test_name -> list[results]} where each `results` list
               is what you'd pass to compute_pi_auc_score(...).
    """
    if not all_tests:
        return {"per_test_auc_log1.5": {}, "total_auc_log1.5": 0.0}

    per_test = {
        name: compute_pi_auc_score(rs, log_base)["auc_log1.5"]
        for name, rs in all_tests.items() if rs
    }
    total = sum(per_test.values()) / len(per_test) if per_test else 0.0
    return {"per_test_auc_log1.5": per_test, "total_auc_log1.5": total}

```


```
## References
- PI-LLM demo site: https://sites.google.com/view/cog4llm
- PI-LLM paper: https://arxiv.org/abs/2506.08184

```bibtex
@misc{wang2025unableforgetproactiveinterference,
      title={Unable to Forget: Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length}, 
      author={Chupei Wang and Jiaqiu Vince Sun},
      year={2025},
      eprint={2506.08184},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.08184}, 
}
```