How Would You Feel Watching a Linux Kernel Veteran Solve a Bug?

This article is reposted from the WeChat public account 相遇Linux, by JeffXie. Please contact the 相遇Linux public account before reposting.

Someone named John Garry reported a bug to the Linux kernel community: after upgrading the kernel to v5.10-rcX, some users saw a process hang after running dd and sync:

Some guys internally upgraded to v5.10-rcX and start to see a hang after  
dd + sync for a large file: 
- mount /dev/sda1 (ext4 filesystem) to directory /mnt; 
- run "dd if=/dev/zero of=test1 bs=1M count=2000" on directory /mnt; 
- run "sync"

The system printed the stack traces of the hung processes:

and get: 
[  367.912761] INFO: task jbd2/sdb1-8:3602 blocked for more than 120 
seconds. 
[  367.919618]       Not tainted 5.10.0-rc1-109488-g32ded76956b6 #948 
[  367.925776] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message. 
[  367.933579] task:jbd2/sdb1-8     state:D stack:    0 pid: 3602 
ppid:     2 flags:0x00000028 
[  367.941901] Call trace: 
[  367.944351] __switch_to+0xb8/0x168 
[  367.947840] __schedule+0x30c/0x670 
[  367.951326] schedule+0x70/0x108 
[  367.954550] io_schedule+0x1c/0xe8 
[  367.957948] bit_wait_io+0x18/0x68 
[  367.961346] __wait_on_bit+0x78/0xf0 
[  367.964919] out_of_line_wait_on_bit+0x8c/0xb0 
[  367.969356] __wait_on_buffer+0x30/0x40 
[  367.973188] jbd2_journal_commit_transaction+0x1370/0x1958 
[  367.978661] kjournald2+0xcc/0x260 
[  367.982061] kthread+0x150/0x158 
[  367.985288] ret_from_fork+0x10/0x34 
[  367.988860] INFO: task sync:3823 blocked for more than 120 seconds. 
[  367.995102]       Not tainted 5.10.0-rc1-109488-g32ded76956b6 #948 
[  368.001265] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message. 
[  368.009067] task:sync            state:D stack:    0 pid: 3823 ppid: 
3450 flags:0x00000009 
[  368.017397] Call trace: 
[  368.019841] __switch_to+0xb8/0x168 
[  368.023320] __schedule+0x30c/0x670 
[  368.026804] schedule+0x70/0x108 
[  368.030025] jbd2_log_wait_commit+0xbc/0x158 
[  368.034290] ext4_sync_fs+0x188/0x1c8 
[  368.037947] sync_fs_one_sb+0x30/0x40 
[  368.041606] iterate_supers+0x9c/0x138 
[  368.045350] ksys_sync+0x64/0xc0 
[  368.048569] __arm64_sys_sync+0x10/0x20 
[  368.052398] el0_svc_common.constprop.3+0x68/0x170 
[  368.057177] do_el0_svc+0x24/0x90 
[  368.060482] el0_sync_handler+0x118/0x168 
[  368.064478]  el0_sync+0x158/0x180

He also reported that on CPU 100, which is bound to hardware queue 8, the dispatch count and completion count did not match, yet there were no inflight requests. This puzzled him for three days and three nights, and his hair fell out all over the keyboard.

estuary:/sys/kernel/debug/block/sda/hctx8$ cat cpu100/dispatched 
3 0 
estuary:/sys/kernel/debug/block/sda/hctx8$ cat cpu100/completed 
2 0 
On cpu100, it seems completed is less than the number dispatched. Checking 
/sys/devices/pci0000:74/0000:74:02.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda/sda1/inflight, 
the number of inflight is 0.

Then Ming Lei (entering with his own sound system) offered advice: collect the output of every file under the /sys/kernel/debug/block/sda and /sys/block/sda/device directories:

hello chenxiang, Can you collect the debugfs log via the following commands after the io 
hang is triggered? 
1) debugfs log: 
        (cd /sys/kernel/debug/block/sda && find . -type f -exec grep -aH . {} \;) 
2) scsi sysfs info: 
        (cd /sys/block/sda/device && find . -type f -exec grep -aH . {} \;) 
Suppose the disk is /dev/sda.

chenxiang replied:

~$ cd /sys/kernel/debug/block/sdb && find . -type f -exec grep  
-aH . {} \; 
... 
./hctx9/tags:cleared=3891 
./hctx9/tags:bits_per_word=64 
./hctx9/tags:map_nr=63 
./hctx9/tags:alloc_hint={3264, 3265, 0, 3731, 2462, 842, 0, 0, 1278, 27,  
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2424, 0, 0, 0,  
346, 3, 3191, 235, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 88, 0, 0, 285, 0, 0, 0, 0, 0, 0, 0, 0, 0,  
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1165, 538, 0,  
372, 277, 3476, 0, 0, 0, 111, 0, 2081, 0, 112, 0, 0, 0, 0, 904, 1127, 0,  
0, 0, 113, 0, 0, 0, 0, 0, 0, 321, 0} 
./hctx9/tags:wake_batch=8 
./hctx9/tags:wake_index=7 
./hctx9/tags:ws_active=0 
./hctx9/tags:ws={ 
./hctx9/tags:   {.wait_cnt=8, .wait=inactive}, 
./hctx9/tags:   {.wait_cnt=8, .wait=inactive}, 
./hctx9/tags:   {.wait_cnt=8, .wait=inactive}, 
./hctx9/tags:   {.wait_cnt=8, .wait=inactive}, 
./hctx9/tags:   {.wait_cnt=8, .wait=inactive}, 
./hctx9/tags:   {.wait_cnt=8, .wait=inactive}, 
./hctx9/tags:   {.wait_cnt=8, .wait=inactive}, 
./hctx9/tags:   {.wait_cnt=8, .wait=inactive}, 
./hctx9/tags:} 
./hctx9/tags:round_robin=1 
./hctx9/tags:min_shallow_depth=4294967295 
./hctx9/ctx_map:00000000: 00 
...

The guru Ming Lei glanced over the debug output, took a sip of Da Hong Pao tea, and went straight to suggesting this patch to test:

Please try the following patch: 
 
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c 
index 60c7a7d74852..03c6d0620bfd 100644 
--- a/drivers/scsi/scsi_lib.c 
+++ b/drivers/scsi/scsi_lib.c 
@@ -1703,8 +1703,7 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx, 
         break; 
     case BLK_STS_RESOURCE: 
     case BLK_STS_ZONE_RESOURCE: 
-        if (atomic_read(&sdev->device_busy) || 
-            scsi_device_blocked(sdev)) 
+        if (scsi_device_blocked(sdev)) 
             ret = BLK_STS_DEV_RESOURCE; 
         break; 
     default:

After chenxiang tested and reported back, the problem was solved. His heart brimmed with the simple joy of a Linux practitioner, his eyes full of admiration, ready to pledge lifelong devotion:

I have tested the patch 100+ times on two environments (on which the issue 
occurred frequently before), and 
the issue doesn't occur.

And just like that, the problem was solved. If this editor had been handling it, I would have asked the reporter to trigger a crash dump and then analyzed it slowly, which is hundreds of levels lower :)

Pinpointing the problem accurately from nothing but this debug information is what real expertise in the block subsystem looks like, and what secures one's standing in that subsystem. Mastering a subsystem to this degree is truly not easy!

