<aside> <img src="/icons/light-bulb_gray.svg" alt="/icons/light-bulb_gray.svg" width="40px" />
Selection criteria:
| Name | Conference | Notes | Citation | Presenter |
|---|---|---|---|---|
| OLMo: Accelerating the Science of Language Models | ACL 2024 | Best Theme Paper | 396 | 조승환 2026-01-27 |
| Not All Tokens Are What You Need for Pretraining | NeurIPS 2024 | Best Paper Runner-up | 137 | |
| Iteration Head: A Mechanistic Study of Chain-of-Thought | NeurIPS 2024 | Poster | 22 | 박재현 2026-01-13 |
| Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study | ICML 2024 | Oral | 233 | 서성원 2026-01-13 |
| Safety Alignment Should be Made More Than Just a Few Tokens Deep | ICLR 2025 | Oral | 231 | |
| Steering Llama 2 via Contrastive Activation Addition | ACL 2024 | Outstanding Paper | 359 | 석진실 2026-01-20 |
| Amortizing intractable inference in large language models | ICLR 2024 | Outstanding Paper | 97 | |
| Proving Test Set Contamination in Black-Box Language Models | ICLR 2024 | Outstanding Paper | 207 | |
| Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models | ACL 2024 | Outstanding Paper | 170 | 양유진 2026-01-20 |
| Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs | ICLR 2024 | Outstanding Paper | 368 | |
| Faster Cascades via Speculative Decoding | ICLR 2025 | Outstanding Paper | 22 | |
| Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model | ACL 2024 | Best Paper | 305 | |