A Linux tty soft-lockup investigation, part 2
While reproducing the tty deadlock, Wenyang used the following approach:
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/ioctl.h>

#define TIOCVHANGUP 0x5437

int main(int argc, char *argv[])
{
    int fd;

    if (argc < 2) {
        printf("error, you should input tty as a parameter\r\n");
        return 1;
    }
    fd = open(argv[1], O_WRONLY | O_NOCTTY);
    if (fd < 0)
        return 1;
    write(fd, "test tty\n", sizeof("test tty\n") - 1);
    ioctl(fd, TIOCVHANGUP, 0);
    /* sleep(1); */
    close(fd);
    return 0;
}
Compile it with gcc -g -o main.o main.c, then drive it from a script:
#!/bin/bash
while [ 1 ]
do
    ./main.o /dev/tty4
done
The loop lives in a shell script rather than in the C program because the kernel does some tty housekeeping when a process exits, and we want the test to cover as much of that path as possible; for the same reason there is no sleep to throttle it.
This reproduced the soft lockup below; the stack looks like this:
[517571.855382] INFO: task systemd:1 blocked for more than 120 seconds.
[517571.856127] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[517571.856846] systemd D ffff881fffc347c0 0x00000000
[517571.856852] ffff881fd35c7b50 ffff881fd35c7fd8 ffff881fd35c7fd8
[517571.856859] ffff881fd35c7fd8 00000000000147c0 ffff881fd313c500 ffff883f5ee2ac80
[517571.856863] ffff883f5ee2ac84 ffff883fd1630000 00000000ffffffff ffff883f5ee2ac88
[517571.856867] Call Trace:
[517571.856880] [<ffffffff8163f959>] schedule_preempt_disabled+0x29/0x70
[517571.856883] [<ffffffff8163d415>] __mutex_lock_slowpath+0xc5/0x1c0
[517571.856888] [<ffffffff8163c87f>] mutex_lock+0x1f/0x2f
[517571.856890] [<ffffffff81640df8>] tty_lock_nested.isra.+0x38/0x90
[517571.856892] [<ffffffff81640e5e>] tty_lock+0xe/0x10
[517571.856899] [<ffffffff813b204c>] tty_open+0xcc/0x620
[517571.856906] [<ffffffff811e5721>] chrdev_open+0xa1/0x1e0
[517571.856912] [<ffffffff811de657>] do_dentry_open+0x1a7/0x2e0
[517571.856916] [<ffffffff811e5680>] ? cdev_put+0x30/0x30
[517571.856918] [<ffffffff811de889>] vfs_open+0x39/0x70
[517571.856922] [<ffffffff811ede7d>] do_last+0x1ed/0x1270
[517571.856925] [<ffffffff811f0be2>] path_openat+0xc2/0x490
[517571.856930] [<ffffffff810afb68>] ? __wake_up_common+0x58/0x90
[517571.856935] [<ffffffff811f23ab>] do_filp_open+0x4b/0xb0
[517571.856941] [<ffffffff811fef47>] ? __alloc_fd+0xa7/0x130
[517571.856945] [<ffffffff811dfd53>] do_sys_open+0xf3/0x1f0
[517571.856949] [<ffffffff811dfe6e>] SyS_open+0x1e/0x20
[517571.856955] [<ffffffff81649909>] system_call_fastpath+0x16/0x1b
From the stack, it is clearly timing out waiting for a lock again. The key is to identify that lock from the disassembly.
void __lockfunc tty_lock(struct tty_struct *tty)
{
        return tty_lock_nested(tty, TTY_MUTEX_NORMAL);
}

static void __lockfunc tty_lock_nested(struct tty_struct *tty,
                                       unsigned int subclass)
{
        if (tty->magic != TTY_MAGIC) {
                pr_err("L Bad %p\n", tty);
                WARN_ON(1);
                return;
        }
        tty_kref_get(tty);
        mutex_lock_nested(&tty->legacy_mutex, subclass); -------- the pointer of the lock being taken
}
Since CONFIG_DEBUG_LOCK_ALLOC is not configured, mutex_lock_nested is simply mutex_lock, which matches the stack:
# define mutex_lock_nested(lock, subclass) mutex_lock(lock)
crash> dis -l tty_lock_nested
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/drivers/tty/tty_mutex.c:
0xffffffff81640dc0 <tty_lock_nested>:   nopl   0x0(%rax,%rax,1) [FTRACE NOP]
0xffffffff81640dc5 <tty_lock_nested+5>: push   %rbp
0xffffffff81640dc6 <tty_lock_nested+6>: mov    %rsp,%rbp
0xffffffff81640dc9 <tty_lock_nested+9>: push   %rbx
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/drivers/tty/tty_mutex.c:
0xffffffff81640dca <tty_lock_nested+10>:        cmpl   $0x5401,(%rdi)
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/drivers/tty/tty_mutex.c:
0xffffffff81640dd0 <tty_lock_nested+16>:        mov    %rdi,%rbx
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/drivers/tty/tty_mutex.c:
0xffffffff81640dd3 <tty_lock_nested+19>:        jne    0xffffffff81640dfb <tty_lock_nested+59>
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/include/linux/tty.h:
0xffffffff81640dd5 <tty_lock_nested+21>:        test   %rdi,%rdi
0xffffffff81640dd8 <tty_lock_nested+24>:        je     0xffffffff81640dec <tty_lock_nested+44>
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/arch/x86/include/asm/atomic.h:
0xffffffff81640dda <tty_lock_nested+26>:        mov    $0x1,%eax
0xffffffff81640ddf <tty_lock_nested+31>:        lock xadd %eax,0x4(%rdi)
0xffffffff81640de4 <tty_lock_nested+36>:        add    $0x1,%eax
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/include/linux/kref.h:
0xffffffff81640de7 <tty_lock_nested+39>:        cmp    $0x1,%eax
0xffffffff81640dea <tty_lock_nested+42>:        jle    0xffffffff81640e1f <tty_lock_nested+95>
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/drivers/tty/tty_mutex.c:
0xffffffff81640dec <tty_lock_nested+44>:        lea    0x80(%rbx),%rdi -------- the argument is the address of a lock, i.e. &tty->legacy_mutex, so rbx holds the tty pointer
0xffffffff81640df3 <tty_lock_nested+51>:        callq  0xffffffff8163c860 <mutex_lock> -------- calls mutex_lock
crash> dis -l mutex_lock
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/kernel/mutex.c:
0xffffffff8163c860 <mutex_lock>:        nopl   0x0(%rax,%rax,1) [FTRACE NOP]
0xffffffff8163c865 <mutex_lock+5>:      push   %rbp
0xffffffff8163c866 <mutex_lock+6>:      mov    %rsp,%rbp
0xffffffff8163c869 <mutex_lock+9>:      push   %rbx -------- rbx is pushed here, so the slot right after rbp holds rbx
So we can recover the tty pointer from the stack: rbx's save slot sits right after rbp's.
ffff881fd35c7bc0: ffff881fd35c7bd8 ffffffff8163c87f
# [ffff881fd35c7bc8] mutex_lock at ffffffff8163c87f
ffff881fd35c7bd0: ffff883f5ee2ac00 ffff881fd35c7bf0 -------- ffff883f5ee2ac00 is the saved rbx, i.e. the tty pointer
ffff881fd35c7be0: ffffffff81640df8
# [ffff881fd35c7be0] tty_lock_nested at ffffffff81640df8
ffff881fd35c7be8: ffff88211f6a3200 ffff881fd35c7c00
ffff881fd35c7bf8: ffffffff81640e5e
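As a sanity check on that 0x80, here is a minimal userspace sketch of the same container_of arithmetic (struct tty_struct_layout is a made-up stand-in; the real offset of legacy_mutex depends on the kernel config, and 0x80 is simply what the lea instruction above tells us it is in this vmlinux):

#include <stdio.h>
#include <stddef.h>

/* container_of as in the kernel: recover the containing struct
 * from a pointer to one of its members. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-in: in this vmlinux, legacy_mutex happens to live
 * at offset 0x80 inside struct tty_struct (hence "lea 0x80(%rbx),%rdi"). */
struct tty_struct_layout {
    char pad[0x80];     /* magic, kref, dev, driver, ops, ldisc_sem, ... */
    long legacy_mutex;  /* stand-in for struct mutex */
};

int main(void)
{
    struct tty_struct_layout tty;
    long *lock = &tty.legacy_mutex;   /* what "lea 0x80(%rbx),%rdi" computes */

    /* Going the other way, as we just did by hand on the stack: */
    struct tty_struct_layout *recovered =
        container_of(lock, struct tty_struct_layout, legacy_mutex);

    printf("tty=%p recovered=%p offset=0x%zx\n",
           (void *)&tty, (void *)recovered,
           offsetof(struct tty_struct_layout, legacy_mutex));
    return 0;
}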
Now we need to find out who owns this lock.
crash> struct tty_struct.legacy_mutex ffff883f5ee2ac00
legacy_mutex = {
count = {
counter = -
},
wait_lock = {
{
rlock = {
raw_lock = {
{
head_tail = ,
tickets = {
head = ,
tail =
}
}
}
}
}
},
wait_list = {
next = 0xffff881fd35c7b70,
prev = 0xffff881fd35c7b70
},
owner = 0xffff880190f5c500, -------- the lock holder
Look up the corresponding task:
crash> task 0xffff880190f5c500
PID: 5628   TASK: ffff880190f5c500  CPU:    COMMAND: "main.o" -------- this is our compiled test program
Confirm that it is indeed our tty4:
crash> struct tty_struct.name ffff883f5ee2ac00
name = "tty4\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"
With that confirmed, look at the files the process has open:
crash> files
PID: 5628   TASK: ffff880190f5c500  CPU:    COMMAND: "main.o"
ROOT: /    CWD: /home/caq
 FD       FILE            DENTRY           INODE       TYPE PATH
  0 ffff881f0e31a600 ffff880dd37f8000 ffff8801713fcea0 CHR  /dev/pts/
  1 ffff881f0e31a600 ffff880dd37f8000 ffff8801713fcea0 CHR  /dev/pts/
  2 ffff881f0e31a600 ffff880dd37f8000 ffff8801713fcea0 CHR  /dev/pts/
  3 ffff881a00324400 ffff883fd1010fc0 ffff883fd0b73820 CHR  /dev/tty4 -------- the fd numbers were stripped from the original dump; 0-2 are the std streams on the pts, 3 is our open of tty4
Look at the attributes of the corresponding tty:
crash> struct file.private_data ffff881a00324400
private_data = 0xffff883f6101e840
crash> struct tty_file_private.tty 0xffff883f6101e840
tty = 0xffff883f5ee2ac00
crash> struct tty_struct.disc_data 0xffff883f5ee2ac00 -------- this 0xffff883f5ee2ac00 is the tty pointer we recovered from the disassembly and the stack
disc_data = 0xffff883f9a1d8c00
crash> struct n_tty_data.icanon 0xffff883f9a1d8c00
icanon = '\001'
Of course, you can also reach the same data directly from the tty.
In the end all roads converge: it is the same problem as last time, caused by the modified attribute.
Let's now see how many processes are blocked in total:
# grep mutex_lock -A 5 -B 5 caq_all_bt.txt |grep tty_open |wc -l
90
# grep mutex_lock -A 5 -B 5 caq_all_bt.txt |grep tty_open
# [ffff881fd35c7c08] tty_open at ffffffff813b204c -------- only process 1 is blocked here
# [ffff8820c222bc08] tty_open at ffffffff813b1ff7 -------- everyone else is blocked here
# [ffff882aaa9b3c08] tty_open at ffffffff813b1ff7
# [ffff883f20ca7c08] tty_open at ffffffff813b1ff7
# [ffff882098d2bc08] tty_open at ffffffff813b1ff7
# [ffff88147ff87c08] tty_open at ffffffff813b1ff7
# [ffff8820ff4cbc08] tty_open at ffffffff813b1ff7
# [ffff88106e5c7c08] tty_open at ffffffff813b1ff7
# [ffff880192813c08] tty_open at ffffffff813b1ff7
# [ffff880164ccbc08] tty_open at ffffffff813b1ff7
# [ffff882093c13c08] tty_open at ffffffff813b1ff7
# [ffff8814221b7c08] tty_open at ffffffff813b1ff7
# [ffff883f3c74fc08] tty_open at ffffffff813b1ff7
# [ffff88136e433c08] tty_open at ffffffff813b1ff7
# [ffff882141f37c08] tty_open at ffffffff813b1ff7
# [ffff8820db4ebc08] tty_open at ffffffff813b1ff7
# [ffff88149471fc08] tty_open at ffffffff813b1ff7
# [ffff8801a4417c08] tty_open at ffffffff813b1ff7
# [ffff883f0acd3c08] tty_open at ffffffff813b1ff7
# [ffff883ebce9fc08] tty_open at ffffffff813b1ff7
# [ffff88208bfd3c08] tty_open at ffffffff813b1ff7
# [ffff882087d0bc08] tty_open at ffffffff813b1ff7
# [ffff8820d556bc08] tty_open at ffffffff813b1ff7
# [ffff8820c235bc08] tty_open at ffffffff813b1ff7
# [ffff8820e7ce3c08] tty_open at ffffffff813b1ff7
# [ffff88210c25fc08] tty_open at ffffffff813b1ff7
# [ffff8820ebe2fc08] tty_open at ffffffff813b1ff7
# [ffff8820e82c7c08] tty_open at ffffffff813b1ff7
# [ffff88212af2fc08] tty_open at ffffffff813b1ff7
# [ffff881ad4ef7c08] tty_open at ffffffff813b1ff7
# [ffff883f1a8afc08] tty_open at ffffffff813b1ff7
# [ffff88146efb3c08] tty_open at ffffffff813b1ff7
# [ffff8801c557fc08] tty_open at ffffffff813b1ff7
# [ffff88044e66fc08] tty_open at ffffffff813b1ff7
# [ffff8801664dbc08] tty_open at ffffffff813b1ff7
# [ffff8801a1fefc08] tty_open at ffffffff813b1ff7
# [ffff8801850c7c08] tty_open at ffffffff813b1ff7
# [ffff8801c6563c08] tty_open at ffffffff813b1ff7
# [ffff8801751dfc08] tty_open at ffffffff813b1ff7
# [ffff8801272fbc08] tty_open at ffffffff813b1ff7
# [ffff880173073c08] tty_open at ffffffff813b1ff7
# [ffff880179ccbc08] tty_open at ffffffff813b1ff7
# [ffff8813895f7c08] tty_open at ffffffff813b1ff7
# [ffff88152025fc08] tty_open at ffffffff813b1ff7
# [ffff88019e403c08] tty_open at ffffffff813b1ff7
# [ffff8801504f3c08] tty_open at ffffffff813b1ff7
# [ffff88017841fc08] tty_open at ffffffff813b1ff7
# [ffff88018e80fc08] tty_open at ffffffff813b1ff7
# [ffff881345b57c08] tty_open at ffffffff813b1ff7
# [ffff881f2c0ffc08] tty_open at ffffffff813b1ff7
# [ffff88049b78bc08] tty_open at ffffffff813b1ff7
# [ffff8801aff13c08] tty_open at ffffffff813b1ff7
# [ffff880186f77c08] tty_open at ffffffff813b1ff7
# [ffff8814fd963c08] tty_open at ffffffff813b1ff7
# [ffff8803d37dbc08] tty_open at ffffffff813b1ff7
# [ffff8801cacfbc08] tty_open at ffffffff813b1ff7
# [ffff8801d6937c08] tty_open at ffffffff813b1ff7
# [ffff8805689d3c08] tty_open at ffffffff813b1ff7
# [ffff883f8b9d7c08] tty_open at ffffffff813b1ff7
# [ffff883f7d873c08] tty_open at ffffffff813b1ff7
# [ffff8801fd47bc08] tty_open at ffffffff813b1ff7
# [ffff881387ecfc08] tty_open at ffffffff813b1ff7
# [ffff88145225fc08] tty_open at ffffffff813b1ff7
# [ffff88055235bc08] tty_open at ffffffff813b1ff7
# [ffff8803d2297c08] tty_open at ffffffff813b1ff7
# [ffff881432223c08] tty_open at ffffffff813b1ff7
# [ffff880d100cbc08] tty_open at ffffffff813b1ff7
# [ffff88018e9e3c08] tty_open at ffffffff813b1ff7
# [ffff8813879d7c08] tty_open at ffffffff813b1ff7
# [ffff88021a327c08] tty_open at ffffffff813b1ff7
# [ffff88021747bc08] tty_open at ffffffff813b1ff7
# [ffff88016bb43c08] tty_open at ffffffff813b1ff7
# [ffff880152223c08] tty_open at ffffffff813b1ff7
# [ffff8801acbcbc08] tty_open at ffffffff813b1ff7
# [ffff88018a2dfc08] tty_open at ffffffff813b1ff7
# [ffff88018821bc08] tty_open at ffffffff813b1ff7
# [ffff883ea5b9bc08] tty_open at ffffffff813b1ff7
# [ffff880242e8fc08] tty_open at ffffffff813b1ff7
# [ffff88136ce7fc08] tty_open at ffffffff813b1ff7
# [ffff880186217c08] tty_open at ffffffff813b1ff7
# [ffff8801685b3c08] tty_open at ffffffff813b1ff7
# [ffff883edb1bbc08] tty_open at ffffffff813b1ff7
# [ffff883efc4dfc08] tty_open at ffffffff813b1ff7
# [ffff8820ecaffc08] tty_open at ffffffff813b1ff7
# [ffff883e77557c08] tty_open at ffffffff813b1ff7
# [ffff8813dcbdfc08] tty_open at ffffffff813b1ff7
# [ffff8801544dfc08] tty_open at ffffffff813b1ff7
# [ffff8820d552fc08] tty_open at ffffffff813b1ff7
# [ffff8801dab0fc08] tty_open at ffffffff813b1ff7
# [ffff883fa1f83c08] tty_open at ffffffff813b1ff7
Of these 90, only one is at tty_open at ffffffff813b204c; the other 89 are all blocked at tty_open at ffffffff813b1ff7. Per the source line from the disassembly, that address is the mutex_lock(&tty_mutex) statement, and tty_mutex is one big global lock.
Those 89 processes are blocked because process 1 has taken the big tty_mutex.
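To double-check that mapping yourself, the same address-to-line lookup works (a sketch; the output depends on the exact debuginfo loaded, so it is not reproduced here):

crash> sym ffffffff813b1ff7 -------- resolves the address to tty_open plus an offset
crash> dis -l ffffffff813b1ff7 -------- prints the drivers/tty/tty_io.c source line, which is the mutex_lock(&tty_mutex) statement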
Process 1 itself is blocked at this point in tty_open:
        if (tty) {
                tty_lock(tty); -------- process 1 blocks here, i.e. on tty->legacy_mutex
                retval = tty_reopen(tty);
                if (retval < 0) {
                        tty_unlock(tty);
                        tty = ERR_PTR(retval);
                }
        }
Process 1 is blocked because of process 5628. Here is 5628's stack:
# [ffff883edb11fbd0] __schedule at ffffffff8163df9b
# [ffff883edb11fc38] schedule at ffffffff8163e879
# [ffff883edb11fc48] schedule_timeout at ffffffff8163c329
# [ffff883edb11fcf8] ldsem_down_write at ffffffff8164061a
# [ffff883edb11fd68] tty_ldisc_lock_pair_timeout at ffffffff81640cd8
# [ffff883edb11fd98] tty_ldisc_hangup at ffffffff813b8dc4
# [ffff883edb11fdc0] __tty_hangup at ffffffff813b0594
# [ffff883edb11fe10] tty_ioctl at ffffffff813b2e55
# [ffff883edb11feb8] do_vfs_ioctl at ffffffff811f4465
# [ffff883edb11ff30] sys_ioctl at ffffffff811f46e1
# [ffff883edb11ff80] system_call_fastpath at ffffffff81649909
RIP: 00007f5438b3f537 RSP: 00007ffef141f478 RFLAGS:
RAX: RBX: ffffffff81649909 RCX: 00007f5438b39c90
RDX: RSI: RDI:
RBP: 00007ffef141f4a0 R8: 00007f5438e0ce80 R9:
R10: 00007ffef141f200 R11: R12:
R13: R14: 00007ffef141f580 R15:
ORIG_RAX: CS: SS: 002b
The __tty_hangup code below shows that tty_lock(tty) is called before tty_ldisc_hangup, so this task really does hold a tty->legacy_mutex:
static void __tty_hangup(struct tty_struct *tty, int exit_session)
{
        struct file *cons_filp = NULL;
        struct file *filp, *f = NULL;
        struct tty_file_private *priv;
        int    closecount = 0, n;
        int refs;

        if (!tty)
                return;

        spin_lock(&redirect_lock);
        if (redirect && file_tty(redirect) == tty) {
                f = redirect;
                redirect = NULL;
        }
        spin_unlock(&redirect_lock);

        tty_lock(tty); -------- takes the lock

        /* some functions below drop BTM, so we need this bit */
        set_bit(TTY_HUPPING, &tty->flags);

        /* inuse_filps is protected by the single tty lock,
           this really needs to change if we want to flush the
           workqueue with the lock held */
        check_tty_count(tty, "tty_hangup");

        spin_lock(&tty_files_lock);
        /* This breaks for file handles being sent over AF_UNIX sockets ? */
        list_for_each_entry(priv, &tty->tty_files, list) {
                filp = priv->file;
                if (filp->f_op->write == redirected_tty_write)
                        cons_filp = filp;
                if (filp->f_op->write != tty_write)
                        continue;
                closecount++;
                __tty_fasync(-1, filp, 0);      /* can't block */
                filp->f_op = &hung_up_tty_fops;
        }
        spin_unlock(&tty_files_lock);

        refs = tty_signal_session_leader(tty, exit_session);
        /* Account for the p->signal references we killed */
        while (refs--)
                tty_kref_put(tty);

        /*
         * it drops BTM and thus races with reopen
         * we protect the race by TTY_HUPPING
         */
        tty_ldisc_hangup(tty); -------- blocks here (the tty_ldisc_hangup frame in 5628's stack above)

        spin_lock_irq(&tty->ctrl_lock);
        clear_bit(TTY_THROTTLED, &tty->flags);
        clear_bit(TTY_PUSH, &tty->flags);
        clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
        put_pid(tty->session);
        put_pid(tty->pgrp);
        tty->session = NULL;
        tty->pgrp = NULL;
        tty->ctrl_status = 0;
        spin_unlock_irq(&tty->ctrl_lock);

        /*
         * If one of the devices matches a console pointer, we
         * cannot just call hangup() because that will cause
         * tty->count and state->count to go out of sync.
         * So we just call close() the right number of times.
         */
        if (cons_filp) {
                if (tty->ops->close)
                        for (n = 0; n < closecount; n++)
                                tty->ops->close(tty, cons_filp);
        } else if (tty->ops->hangup)
                (tty->ops->hangup)(tty);
        /*
         * We don't want to have driver/ldisc interactions beyond
         * the ones we did here. The driver layer expects no
         * calls after ->hangup() from the ldisc side. However we
         * can't yet guarantee all that.
         */
        set_bit(TTY_HUPPED, &tty->flags);
        clear_bit(TTY_HUPPING, &tty->flags);

        tty_unlock(tty); -------- so execution never reaches this unlock

        if (f)
                fput(f);
}
I thought the analysis was complete at this point, but reading the tty_ldisc_hangup code overturned that conclusion. First, let's pin down which line tty_ldisc_hangup is executing:
crash> dis -l ffffffff813b8dc4
/usr/src/debug/kernel-3.10.0-327.22.2.el7/linux-3.10.0-327.22.2.el7.x86_64/drivers/tty/tty_ldisc.c: 690
0xffffffff813b8dc4 <tty_ldisc_hangup+>: cmpq   $0x0,0x50(%rbx)
Line 690 is exactly tty_ldisc_lock_pair, i.e. tty_ldisc_lock_pair_timeout(tty, tty2, MAX_SCHEDULE_TIMEOUT).
Here is the tty_ldisc_hangup code:
void tty_ldisc_hangup(struct tty_struct *tty)
{
        struct tty_ldisc *ld;
        int reset = tty->driver->flags & TTY_DRIVER_RESET_TERMIOS;
        int err = 0;

        tty_ldisc_debug(tty, "closing ldisc: %p\n", tty->ldisc);

        ld = tty_ldisc_ref(tty);
        if (ld != NULL) {
                if (ld->ops->flush_buffer)
                        ld->ops->flush_buffer(tty);
                tty_driver_flush_buffer(tty);
                if ((test_bit(TTY_DO_WRITE_WAKEUP, &tty->flags)) &&
                    ld->ops->write_wakeup)
                        ld->ops->write_wakeup(tty);
                if (ld->ops->hangup)
                        ld->ops->hangup(tty);
                tty_ldisc_deref(ld);
        }

        wake_up_interruptible_poll(&tty->write_wait, POLLOUT);
        wake_up_interruptible_poll(&tty->read_wait, POLLIN);

        tty_unlock(tty); -------- the lock clearly IS released here

        /*
         * Shutdown the current line discipline, and reset it to
         * N_TTY if need be.
         *
         * Avoid racing set_ldisc or tty_ldisc_release
         */
        tty_ldisc_lock_pair(tty, tty->link); -------- line 690, i.e. tty_ldisc_lock_pair_timeout(tty, tty2, MAX_SCHEDULE_TIMEOUT), matching the stack
        tty_lock(tty); -------- and then re-acquired here

        if (tty->ldisc) {

                /* At this point we have a halted ldisc; we want to close it and
                   reopen a new ldisc. We could defer the reopen to the next
                   open but it means auditing a lot of other paths so this is
                   a FIXME */
                if (reset == 0) {

                        if (!tty_ldisc_reinit(tty, tty->termios.c_line))
                                err = tty_ldisc_open(tty, tty->ldisc);
                        else
                                err = 1;
                }
                /* If the re-open fails or we reset then go to N_TTY. The
                   N_TTY open cannot fail */
                if (reset || err) {
                        BUG_ON(tty_ldisc_reinit(tty, N_TTY));
                        WARN_ON(tty_ldisc_open(tty, tty->ldisc));
                }
        }
        tty_ldisc_enable_pair(tty, tty->link);
        if (reset)
                tty_reset_termios(tty);

        tty_ldisc_debug(tty, "re-opened ldisc: %p\n", tty->ldisc);
}
So process 5628 did in fact release tty->legacy_mutex. Why, then, does the owner of the mutex that process 1 is waiting on still point to it? We'll come back to that question at the end of this post.
Let's go back to that tty->legacy_mutex lock:
wait_list = {
next = 0xffff881fd35c7b70,
prev = 0xffff881fd35c7b70
},
crash> list -s mutex_waiter.task 0xffff881fd35c7b70
ffff881fd35c7b70
  task = 0xffff883fd1630000
ffff883f5ee2ac88
  task = 0xffff880190f5c500
crash> task 0xffff883fd1630000
PID: 1      TASK: ffff883fd1630000  CPU:    COMMAND: "systemd"
crash> task 0xffff880190f5c500
PID: 5628   TASK: ffff880190f5c500  CPU:    COMMAND: "main.o"
How can 5628 be both the owner and a waiter? We'll answer that at the end of this post.
crash> struct tty_struct.link ffff883f5ee2ac00
link = 0x0
So the call chain from here is: tty_ldisc_lock_pair(tty, tty->link) ---> tty_ldisc_lock_pair_timeout(0xffff883f5ee2ac00, 0, MAX_SCHEDULE_TIMEOUT) ---> tty_ldisc_lock ---> ldsem_down_write.
Process 5628 is blocked on the line-discipline lock, tty->ldisc_sem. This is a read-write semaphore, and with debug disabled it has no owner member:
crash> struct tty_struct.ldisc_sem ffff883f5ee2ac00
ldisc_sem = {
count = -,
wait_lock = {
raw_lock = {
{
head_tail = ,
tickets = {
head = ,
tail =
}
}
}
},
wait_readers = ,
read_wait = {
next = 0xffff8801846d3df0,
prev = 0xffff8801846d3df0
},
write_wait = {
next = 0xffff883edb11fd10,
prev = 0xffff883edb11fd10
}
}
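For reference, those fields line up with the 3.10-era definition of struct ld_semaphore from include/linux/tty_ldisc.h (quoted from memory, so treat it as approximate); note there is no owner field, and even CONFIG_DEBUG_LOCK_ALLOC only adds a lockdep map:

struct ld_semaphore {
        long                    count;
        raw_spinlock_t          wait_lock;
        unsigned int            wait_readers;
        struct list_head        read_wait;
        struct list_head        write_wait;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
        struct lockdep_map      dep_map;
#endif
};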
To find the owner we would have to walk the stacks by hand again. And it is the same story as in《记录linux tty的一次软锁排查》(part 1): a task is holding the lock, the intent was to hold it for at most 200 ms, but the modified attribute turned that into an unbounded hold.
Which means, clearly, this reproduction script can be used to verify whether the fix from part 1 is actually sound.
The modified script:
#!/bin/bash
while [ 1 ]
do
    for i in {1..63}    # the exact range was lost from the original post; 1..63 would cover every VT
    do
        ./main.o /dev/tty$i
    done
done
Before the fd was switched to non-blocking, this reproduced every single time; after the fix, a full day of brute-force testing ran clean.
Now, as promised, let's explain why the task shown in the wait list and the owner are the same.
When tty_init_dev initializes a tty, it calls initialize_tty_struct ------> mutex_init(&tty->legacy_mutex) ----> __mutex_init:
void
__mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
{
        atomic_set(&lock->count, 1);
        spin_lock_init(&lock->wait_lock);
        INIT_LIST_HEAD(&lock->wait_list);
        mutex_clear_owner(lock);
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
        lock->osq = NULL;
#endif

        debug_mutex_init(lock, name, key);
}
At this point the lock's wait_list holds nothing but its own head node, &lock->wait_list; for our tty that head is 0xffff883f5ee2ac88, the second node in the list walk above. If you ran list -s mutex_waiter.task against it in this freshly initialized state, the task shown would be NULL.
Here is struct mutex_waiter:
struct mutex_waiter {
struct list_head list;
struct task_struct *task;
#ifdef CONFIG_DEBUG_MUTEXES
void *magic;
#endif
};
lock->wait_list is what the mutex_waiter entries are chained on, and in struct mutex the owner member sits immediately after struct list_head wait_list. A list walk always passes through the head node embedded in the mutex itself; when crash casts that head to a mutex_waiter and reads .task, the task offset (right after the embedded list_head) lands exactly on owner. That is why the list command appears to show the owner also sitting on the wait list as a waiter.
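A minimal userspace sketch of that aliasing (mutex_sim and mutex_waiter_sim are simplified stand-ins I made up; all that matters is the layout, with owner immediately after wait_list, as in the 3.10 struct mutex):

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* Simplified stand-in for the 3.10 struct mutex: owner right after wait_list. */
struct mutex_sim {
    int              count;      /* atomic_t in the kernel */
    int              wait_lock;  /* spinlock_t in the kernel */
    struct list_head wait_list;
    void            *owner;
};

/* Same layout as the kernel's mutex_waiter without debug fields. */
struct mutex_waiter_sim {
    struct list_head list;
    void            *task;       /* same offset from the node as owner is from wait_list */
};

int main(void)
{
    struct mutex_sim m;
    m.owner = (void *)0xffff880190f5c500UL;   /* pretend main.o owns the lock */
    m.wait_list.next = m.wait_list.prev = &m.wait_list;

    /* What "list -s mutex_waiter.task" does when it reaches the embedded head:
     * treat &m.wait_list as a mutex_waiter and read .task, which is m.owner. */
    struct mutex_waiter_sim *fake = (struct mutex_waiter_sim *)&m.wait_list;
    printf("fake waiter task = %p, owner = %p\n", fake->task, m.owner);
    return 0;
}

So the second node crash printed, 0xffff883f5ee2ac88 with task = 0xffff880190f5c500, is not a real waiter at all: it is the head embedded in the mutex, and the "task" read through it is really the owner field. 5628 was never both owner and waiter.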