CVE Details
Description
In the Linux kernel, the following vulnerability has been resolved:

blk-rq-qos: fix crash on rq_qos_wait vs. rq_qos_wake_function race

We're seeing crashes from rq_qos_wake_function that look like this:

BUG: unable to handle page fault for address: ffffafe180a40084
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 100000067 P4D 100000067 PUD 10027c067 PMD 10115d067 PTE 0
Oops: Oops: 0002 [#1] PREEMPT SMP PTI
CPU: 17 UID: 0 PID: 0 Comm: swapper/17 Not tainted 6.12.0-rc3-00013-geca631b8fe80 #11
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
RIP: 0010:_raw_spin_lock_irqsave+0x1d/0x40
Code: 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 0f 1f 44 00 00 41 54 9c 41 5c fa 65 ff 05 62 97 30 4c 31 c0 ba 01 00 00 00 0f b1 17 75 0a 4c 89 e0 41 5c c3 cc cc cc cc 89 c6 e8 2c 0b 00
RSP: 0018:ffffafe180580ca0 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffffafe180a3f7a8 RCX: 0000000000000011
RDX: 0000000000000001 RSI: 0000000000000003 RDI: ffffafe180a40084
RBP: 0000000000000000 R08: 00000000001e7240 R09: 0000000000000011
R10: 0000000000000028 R11: 0000000000000888 R12: 0000000000000002
R13: ffffafe180a40084 R14: 0000000000000000 R15: 0000000000000003
FS:  0000000000000000(0000) GS:ffff9aaf1f280000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffafe180a40084 CR3: 000000010e428002 CR4: 0000000000770ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 try_to_wake_up+0x5a/0x6a0
 rq_qos_wake_function+0x71/0x80
 __wake_up_common+0x75/0xa0
 __wake_up+0x36/0x60
 scale_up.part.0+0x50/0x110
 wb_timer_fn+0x227/0x450
 ...

So rq_qos_wake_function() calls wake_up_process(data->task), which calls try_to_wake_up(), which faults in raw_spin_lock_irqsave(&p->pi_lock). p comes from data->task, and data comes from the waitqueue entry, which is stored on the waiter's stack in rq_qos_wait(). Analyzing the core dump with drgn, I found that the waiter had already woken up and moved on to a completely unrelated code path, clobbering what was previously data->task. Meanwhile, the waker was passing the clobbered garbage in data->task to wake_up_process(), leading to the crash.

What's happening is that in between rq_qos_wake_function() deleting the waitqueue entry and calling wake_up_process(), rq_qos_wait() is finding that it already got a token and returning. The race looks like this:

rq_qos_wait()                           rq_qos_wake_function()
==============================================================
prepare_to_wait_exclusive()
                                        data->got_token = true;
                                        list_del_init(&curr->entry);
if (data.got_token)
        break;
finish_wait(&rqw->wait, &data.wq);
 ^- returns immediately because
    list_empty_careful(&wq_entry->entry)
    is true
... return, go do something else ...
                                        wake_up_process(data->task)
                                        (NO LONGER VALID!)-^

Normally, finish_wait() is supposed to synchronize against the waker. But, as noted above, it is returning immediately because the waitqueue entry has already been removed from the waitqueue.

The bug is that rq_qos_wake_function() is accessing the waitqueue entry AFTER deleting it. Note that autoremove_wake_function() wakes the waiter and THEN deletes the waitqueue entry, which is the proper order.

Fix it by swapping the order. We also need to use list_del_init_careful() to match the list_empty_careful() in finish_wait().
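To make the corrected ordering concrete, here is a minimal sketch of the wake callback in block/blk-rq-qos.c after the fix described above. It assumes the upstream struct rq_qos_wait_data layout (fields wq, task, rqw, cb, private_data, got_token); treat it as an illustrative sketch of the swapped order rather than the exact upstream patch.

static int rq_qos_wake_function(struct wait_queue_entry *curr,
                                unsigned int mode, int wake_flags, void *key)
{
        struct rq_qos_wait_data *data = container_of(curr,
                                                     struct rq_qos_wait_data,
                                                     wq);

        /* No budget available: return -1 to stop the wake-up loop in __wake_up_common(). */
        if (!data->cb(data->rqw, data->private_data))
                return -1;

        data->got_token = true;
        smp_wmb();

        /*
         * Order matters: once the entry is off the waitqueue, the waiter's
         * finish_wait() returns immediately and its on-stack data (including
         * data->task) may be reused. So wake the task while the entry is
         * still valid, and only then delete it.
         */
        wake_up_process(data->task);
        list_del_init_careful(&curr->entry);
        return 1;
}

Waking the task before deleting the entry mirrors autoremove_wake_function(), and list_del_init_careful() provides the memory ordering that the lockless list_empty_careful() check in finish_wait() relies on.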
See more information about CVE-2024-50082 from the MITRE CVE dictionary and NIST NVD.
NOTE: The following CVSS metrics and score are preliminary and subject to review.
CVSS v3 metrics
Base Score: 5.5
Vector String: CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Version: 3.1
Attack Vector: Local
Attack Complexity: Low
Privileges Required: Low
User Interaction: None
Scope: Unchanged
Confidentiality: None
Integrity: None
Availability: High
Errata information