Stick To Your Role! Leaderboard

LLMs can role-play different personas by simulating their values and behavior, but can they stick to their role whatever the context? Is a simulated Joan of Arc more tradition-driven than a simulated Elvis? Will that still be the case after playing chess?

The Stick to Your Role! leaderboard compares LLMs based on their undesired sensitivity to context changes. LLM-exhibited behavior always depends on the context (prompt). While some context-dependence is desired (e.g. following instructions), some is undesired (e.g. drastically changing the simulated value expression based on the interlocutor). As proposed in our paper, undesired context-dependence should be seen as a property of LLMs: a dimension of LLM comparison alongside others such as model size, speed, or expressed knowledge. This leaderboard aims to provide such a comparison, and it extends our paper with a more focused and elaborate experimental setup. Standard benchmarks present many questions from the same minimal context (e.g. multiple-choice questions); we present the same questions from many different contexts.

The Stick to Your Role! leaderboard focuses on the stability of simulated personal values during role-playing. We study the coherence of a simulated population: rather than evaluating each simulated persona separately, we evaluate personas relative to each other, i.e. as a population. You can browse the simulated population, questionnaires, and contexts used on our 🤗 StickToYourRole dataset.
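
For readers who want to inspect the data programmatically, here is a minimal sketch using the 🤗 datasets library; the repository id below is a placeholder (use the id shown on the linked dataset page), and the available configurations and splits may differ.

```python
# Minimal sketch for browsing the StickToYourRole data with the 🤗 datasets
# library. The repository id is a placeholder: substitute the id from the
# dataset page linked above.
from datasets import load_dataset

data = load_dataset("StickToYourRole/StickToYourRole")  # placeholder repo id
print(data)                        # available splits and their sizes
first_split = next(iter(data))
print(data[first_split][0])        # first record (persona / context / question)
```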

| Model | Ordinal - Win rate (↑) | Cardinal - Score (↑) | RO Stability (↑) |
|---|---|---|---|
| Ministrations-8B-v1 | 0.595 | 0.563 | 0.506 |
| Cydonia-22B-v1.2 | 0.707 | 0.655 | 0.619 |
| Nautilus-70B-v0.1 | 0.783 | 0.707 | 0.633 |
| Ministral-8B-Instruct-2410 | 0.547 | 0.520 | 0.412 |
| llama-3.1-nemotron-70B-instruct | 0.871 | 0.752 | 0.717 |
| hermes_3_llama_3.1_70b | 0.535 | 0.480 | 0.259 |
| hermes_3_llama_3.1_8b | 0.422 | 0.412 | 0.165 |
| gemma-2-2b-it | 0.330 | 0.331 | 0.147 |
| gemma-2-9b-it | 0.708 | 0.602 | 0.438 |
| gemma-2-27b-it | 0.598 | 0.527 | 0.392 |
| phi-3-mini-128k-instruct | 0.301 | 0.330 | 0.039 |
| phi-3-medium-128k-instruct | 0.298 | 0.308 | 0.097 |
| phi-3.5-mini-instruct | 0.220 | 0.268 | 0.036 |
| phi-3.5-MoE-instruct | 0.358 | 0.361 | 0.110 |
| Mistral-7B-Instruct-v0.1 | 0.198 | 0.266 | 0.027 |
| Mistral-7B-Instruct-v0.2 | 0.315 | 0.321 | 0.144 |
| Mistral-7B-Instruct-v0.3 | 0.234 | 0.266 | 0.080 |
| Mixtral-8x7B-Instruct-v0.1 | 0.397 | 0.382 | 0.215 |
| Mixtral-8x22B-Instruct-v0.1 | 0.311 | 0.315 | 0.141 |
| command_r_plus | 0.549 | 0.500 | 0.343 |
| llama_3_8b_instruct | 0.457 | 0.430 | 0.245 |
| llama_3_70b_instruct | 0.759 | 0.684 | 0.607 |
| llama_3.1_8b_instruct | 0.537 | 0.479 | 0.430 |
| llama_3.1_70b_instruct | 0.806 | 0.717 | 0.691 |
| llama_3.1_405b_instruct_4bit | 0.727 | 0.649 | 0.723 |
| llama_3.2_1b_instruct | 0.197 | 0.252 | 0.027 |
| llama_3.2_3b_instruct | 0.359 | 0.362 | 0.135 |
| Qwen2-7B-Instruct | 0.370 | 0.364 | 0.251 |
| Qwen2-72B-Instruct | 0.564 | 0.546 | 0.647 |
| Qwen2.5-0.5B-Instruct | 0.271 | 0.301 | 0.003 |
| Qwen2.5-7B-Instruct | 0.568 | 0.516 | 0.334 |
| Qwen2.5-32B-Instruct | 0.727 | 0.657 | 0.672 |
| Qwen2.5-72B-Instruct | 0.811 | 0.710 | 0.697 |
| gpt-3.5-turbo-0125 | 0.217 | 0.282 | 0.082 |
| gpt-4o-0513 | 0.669 | 0.599 | 0.512 |
| gpt-4o-mini-2024-07-18 | 0.335 | 0.342 | 0.136 |
| Mistral-Large-Instruct-2407 | 0.827 | 0.737 | 0.764 |
| Mistral-Nemo-Instruct-2407 | 0.549 | 0.526 | 0.441 |
| Mistral-Small-Instruct-2409 | 0.760 | 0.689 | 0.642 |
| dummy | 0.173 | 0.229 | -0.009 |

We leverage Schwartz's theory of Basic Personal Values, which defines 10 values (Self-Direction, Stimulation, Hedonism, Achievement, Power, Security, Conformity, Tradition, Benevolence, Universalism), and the associated PVQ-40 and SVS questionnaires (available here).
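
To make the scoring concrete, here is a minimal sketch of standard Schwartz-style scoring for a PVQ-40 administration. The item-value key below is truncated and purely illustrative (the full key ships with the questionnaire), and the leaderboard's exact scoring may differ.

```python
import numpy as np

# Illustrative, truncated item-value key; the complete PVQ-40 key comes with
# the questionnaire materials.
ITEM_KEY = {
    "Self-Direction": [1, 11, 22, 34],
    "Stimulation": [6, 15, 30],
    "Hedonism": [10, 26, 37],
    # ... remaining seven values omitted for brevity
}

def score_pvq(answers: dict) -> dict:
    """answers maps item number -> rating on the 1-6 PVQ scale."""
    mrat = np.mean(list(answers.values()))  # persona's mean rating (MRAT)
    # Each value score is the mean of its items, centred on MRAT to correct
    # for individual differences in scale use (standard Schwartz practice).
    return {
        value: float(np.mean([answers[i] for i in items]) - mrat)
        for value, items in ITEM_KEY.items()
    }
```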

Following standard methodology from psychology, we focus on population-level (interpersonal) value stability, i.e. Rank-Order stability (RO stability). Rank-Order stability refers to the extent to which the ordering of different personas (in terms of the expression of some value) remains the same across different contexts. Refer here or to our paper for more details.
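
As a rough illustration (not the leaderboard's exact implementation), Rank-Order stability for a single value can be estimated as the average Spearman correlation between persona orderings over all pairs of contexts:

```python
# Hedged sketch: average Spearman correlation of persona orderings across
# context pairs, for one value. Assumes scores[context][persona] holds that
# persona's score for the value in that context.
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr

def rank_order_stability(scores: dict) -> float:
    contexts = list(scores)
    personas = sorted(scores[contexts[0]])
    correlations = []
    for c1, c2 in combinations(contexts, 2):
        rho, _ = spearmanr(
            [scores[c1][p] for p in personas],
            [scores[c2][p] for p in personas],
        )
        correlations.append(rho)
    # Higher mean correlation = the population keeps the same ordering.
    return float(np.mean(correlations))
```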

In addition to Rank-Order stability, we compute validity metrics (Stress, CFI, SRMR, RMSEA), as is common practice in psychology. Validity refers to the extent to which a questionnaire measures what it purports to measure; it can be seen as the questionnaire's accuracy in measuring the intended factors, i.e. values. For example, basic personal values should be organized in a circular structure, and questions measuring the same value should be correlated. The table below additionally shows the validity metrics; refer here for more details.
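
As one concrete example of such a check, the sketch below (an assumption about the general procedure, not the leaderboard's code) tests the circular-structure criterion by embedding the 10 value scales in 2D with metric MDS and reporting Kruskal's Stress-1; CFI, SRMR, and RMSEA come from a confirmatory factor analysis and are not reproduced here.

```python
# Hedged sketch of the circular-structure check: embed inter-value
# dissimilarities with MDS and compute a Stress-1-style badness-of-fit
# (lower = the 10 values fit a low-dimensional, quasi-circular layout better).
import numpy as np
from sklearn.manifold import MDS

def circular_stress(value_scores: np.ndarray) -> float:
    """value_scores: personas x 10 matrix of value scores."""
    corr = np.corrcoef(value_scores, rowvar=False)   # 10 x 10 inter-value correlations
    dissim = 1.0 - corr                               # correlation -> dissimilarity
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)
    fitted = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    # Stress-1-style ratio between residual and fitted distances.
    return float(np.sqrt(np.sum((dissim - fitted) ** 2) / np.sum(fitted ** 2)))
```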

We aggregate Rank-Order stability and validation metrics to rank the models. We do so in two ways: Cardinal and Ordinal. Following this paper, we compute the stability and diversity of those rankings. See here for more details.
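
For intuition, here is a rough sketch of the two aggregation styles under simplifying assumptions (min-max normalisation for the cardinal score, pairwise win counting for the ordinal score); it is not the referenced paper's exact procedure, and metrics where lower is better, such as Stress, would be negated first.

```python
# Hedged sketch of cardinal vs. ordinal aggregation over per-model metrics.
# metrics[model][metric] -> value, with higher assumed to be better.
import numpy as np

def cardinal_scores(metrics: dict) -> dict:
    """Average of min-max-normalised metric values per model."""
    names = list(next(iter(metrics.values())))
    lo = {m: min(v[m] for v in metrics.values()) for m in names}
    hi = {m: max(v[m] for v in metrics.values()) for m in names}
    return {
        model: float(np.mean([(v[m] - lo[m]) / (hi[m] - lo[m] + 1e-12) for m in names]))
        for model, v in metrics.items()
    }

def ordinal_win_rates(metrics: dict) -> dict:
    """Fraction of pairwise (model-vs-model, per-metric) comparisons won."""
    models = list(metrics)
    names = list(next(iter(metrics.values())))
    comparisons = len(names) * (len(models) - 1)
    return {
        a: sum(metrics[a][m] > metrics[b][m] for m in names for b in models if b != a)
        / comparisons
        for a in models
    }
```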

To sum up, the table below reports all the metrics used:

| Model | Ordinal - Win rate (↑) | Cardinal - Score (↑) | RO Stability (↑) | Stress (↓) | CFI (↑) | SRMR (↓) | RMSEA (↓) |
|---|---|---|---|---|---|---|---|
| Ministrations-8B-v1 | 0.595 | 0.563 | 0.506 | 0.225 | 0.559 | 0.425 | 0.430 |
| Cydonia-22B-v1.2 | 0.707 | 0.655 | 0.619 | 0.194 | 0.636 | 0.334 | 0.342 |
| Nautilus-70B-v0.1 | 0.783 | 0.707 | 0.633 | 0.181 | 0.751 | 0.209 | 0.231 |
| Ministral-8B-Instruct-2410 | 0.547 | 0.520 | 0.412 | 0.240 | 0.579 | 0.420 | 0.410 |
| llama-3.1-nemotron-70B-instruct | 0.871 | 0.752 | 0.717 | 0.162 | 0.756 | 0.212 | 0.238 |
| hermes_3_llama_3.1_70b | 0.535 | 0.480 | 0.259 | 0.229 | 0.649 | 0.310 | 0.310 |
| hermes_3_llama_3.1_8b | 0.422 | 0.412 | 0.165 | 0.253 | 0.582 | 0.353 | 0.344 |
| gemma-2-2b-it | 0.330 | 0.331 | 0.147 | 0.263 | 0.409 | 0.550 | 0.538 |
| gemma-2-9b-it | 0.708 | 0.602 | 0.438 | 0.201 | 0.754 | 0.240 | 0.248 |
| gemma-2-27b-it | 0.598 | 0.527 | 0.392 | 0.206 | 0.600 | 0.371 | 0.373 |
| phi-3-mini-128k-instruct | 0.301 | 0.330 | 0.039 | 0.282 | 0.586 | 0.425 | 0.397 |
| phi-3-medium-128k-instruct | 0.298 | 0.308 | 0.097 | 0.265 | 0.430 | 0.550 | 0.538 |
| phi-3.5-mini-instruct | 0.220 | 0.268 | 0.036 | 0.284 | 0.407 | 0.572 | 0.551 |
| phi-3.5-MoE-instruct | 0.358 | 0.361 | 0.110 | 0.274 | 0.553 | 0.425 | 0.403 |
| Mistral-7B-Instruct-v0.1 | 0.198 | 0.266 | 0.027 | 0.283 | 0.389 | 0.556 | 0.530 |
| Mistral-7B-Instruct-v0.2 | 0.315 | 0.321 | 0.144 | 0.265 | 0.380 | 0.573 | 0.548 |
| Mistral-7B-Instruct-v0.3 | 0.234 | 0.266 | 0.080 | 0.274 | 0.314 | 0.624 | 0.608 |
| Mixtral-8x7B-Instruct-v0.1 | 0.397 | 0.382 | 0.215 | 0.262 | 0.453 | 0.503 | 0.491 |
| Mixtral-8x22B-Instruct-v0.1 | 0.311 | 0.315 | 0.141 | 0.255 | 0.377 | 0.581 | 0.584 |
| command_r_plus | 0.549 | 0.500 | 0.343 | 0.238 | 0.603 | 0.374 | 0.367 |
| llama_3_8b_instruct | 0.457 | 0.430 | 0.245 | 0.246 | 0.550 | 0.427 | 0.422 |
| llama_3_70b_instruct | 0.759 | 0.684 | 0.607 | 0.185 | 0.721 | 0.235 | 0.258 |
| llama_3.1_8b_instruct | 0.537 | 0.479 | 0.430 | 0.221 | 0.431 | 0.546 | 0.553 |
| llama_3.1_70b_instruct | 0.806 | 0.717 | 0.691 | 0.171 | 0.698 | 0.264 | 0.291 |
| llama_3.1_405b_instruct_4bit | 0.727 | 0.649 | 0.723 | 0.170 | 0.488 | 0.496 | 0.521 |
| llama_3.2_1b_instruct | 0.197 | 0.252 | 0.027 | 0.293 | 0.374 | 0.599 | 0.574 |
| llama_3.2_3b_instruct | 0.359 | 0.362 | 0.135 | 0.275 | 0.502 | 0.450 | 0.423 |
| Qwen2-7B-Instruct | 0.370 | 0.364 | 0.251 | 0.258 | 0.356 | 0.601 | 0.592 |
| Qwen2-72B-Instruct | 0.564 | 0.546 | 0.647 | 0.203 | 0.304 | 0.654 | 0.665 |
| Qwen2.5-0.5B-Instruct | 0.271 | 0.301 | 0.003 | 0.293 | 0.537 | 0.447 | 0.405 |
| Qwen2.5-7B-Instruct | 0.568 | 0.516 | 0.334 | 0.251 | 0.647 | 0.304 | 0.297 |
| Qwen2.5-32B-Instruct | 0.727 | 0.657 | 0.672 | 0.181 | 0.560 | 0.402 | 0.412 |
| Qwen2.5-72B-Instruct | 0.811 | 0.710 | 0.697 | 0.162 | 0.673 | 0.299 | 0.318 |
| gpt-3.5-turbo-0125 | 0.217 | 0.282 | 0.082 | 0.287 | 0.387 | 0.600 | 0.572 |
| gpt-4o-0513 | 0.669 | 0.599 | 0.512 | 0.192 | 0.624 | 0.345 | 0.344 |
| gpt-4o-mini-2024-07-18 | 0.335 | 0.342 | 0.136 | 0.271 | 0.442 | 0.500 | 0.479 |
| Mistral-Large-Instruct-2407 | 0.827 | 0.737 | 0.764 | 0.169 | 0.651 | 0.310 | 0.330 |
| Mistral-Nemo-Instruct-2407 | 0.549 | 0.526 | 0.441 | 0.211 | 0.516 | 0.429 | 0.431 |
| Mistral-Small-Instruct-2409 | 0.760 | 0.689 | 0.642 | 0.189 | 0.684 | 0.260 | 0.289 |
| dummy | 0.173 | 0.229 | -0.009 | 0.293 | 0.376 | 0.622 | 0.592 |

You can find more details in our paper.

If you found this project useful, please cite one of our related papers, which this leaderboard extends with a more focused and elaborate experimental setup. Refer to the site for details.

Short paper: Kovač, G., Portelas, R., Sawayama, M., Dominey, P. F., & Oudeyer, P. Y. (2024). Stick to your Role! Stability of Personal Values Expressed in Large Language Models. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 46).

@inproceedings{kovavc2024stick,
  title={Stick to your Role! Stability of Personal Values Expressed in Large Language Models},
  author={Kova{\v{c}}, Grgur and Portelas, R{\'e}my and Sawayama, Masataka and Dominey, Peter Ford and Oudeyer, Pierre-Yves},
  booktitle={Proceedings of the Annual Meeting of the Cognitive Science Society},
  volume={46},
  year={2024}
}

Longer paper: Kovač G, Portelas R, Sawayama M, Dominey PF, Oudeyer PY (2024) Stick to your role! Stability of personal values expressed in large language models. PLOS ONE 19(8): e0309114. https://doi.org/10.1371/journal.pone.0309114

@article{kovavc2024stick,
  title={Stick to your role! Stability of personal values expressed in large language models},
  author={Kova{\v{c}}, Grgur and Portelas, R{\'e}my and Sawayama, Masataka and Dominey, Peter Ford and Oudeyer, Pierre-Yves},
  journal={PLOS ONE},
  volume={19},
  number={8},
  pages={e0309114},
  year={2024},
  publisher={Public Library of Science San Francisco, CA USA}
}