Linux Process Scheduling and Switching
2016-04-15
Zhang Chao, "Linux Kernel Analysis" MOOC course: http://mooc.study.163.com/course/USTC-1000029000
1. Analysis: the timing of process scheduling and the process-switch mechanism
Operating-systems textbooks describe a large number of scheduling algorithms, but from an implementation standpoint these algorithms merely pick one new process from the run queue; they differ only in the policy used to make that choice. For understanding how the operating system actually works, the timing of scheduling and the mechanism of process switching are far more important.
When process scheduling happens:
schedule() is a kernel function, not a system call, so a user-mode process cannot call it directly and can only trigger it indirectly. (A kernel thread is a special kind of process that has only a kernel-mode context and no user-mode one.)
1. During interrupt handling (including the timer interrupt, I/O interrupts, system calls, and exceptions), schedule() is either called directly, or called on the return to user mode when the need_resched flag is set;
2. A kernel thread may call schedule() directly to switch processes, and may also be scheduled during interrupt handling; that is, kernel threads, as a special class of process, can be scheduled both actively and passively;
3. A user-mode process cannot schedule actively; it can only be scheduled at some point after trapping into kernel mode, i.e., during interrupt handling. See the sketch right after this list.
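As a hedged illustration of point 3: the user-space program below never calls the kernel's schedule() itself; the closest it can get is entering the kernel through a system call such as sched_yield() and letting the kernel reschedule on its behalf. A minimal runnable sketch using only standard POSIX calls:

#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int i;

    for (i = 0; i < 3; i++) {
        /* Trap into the kernel; schedule() runs there, not here.
         * The user process merely creates a scheduling opportunity. */
        printf("pid %d: yielding the CPU\n", getpid());
        sched_yield();
    }
    return 0;
}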
Process switching:
1. To control process execution, the kernel must be able to suspend the process currently running on the CPU and resume some previously suspended process. This is called a process switch, task switch, or context switch;
2. Suspending the process running on the CPU is different from saving state at an interrupt: before and after an interrupt the CPU stays in the same process context, merely moving from user mode to kernel mode;
3. The process context contains all the information the process needs to execute:
   I. the user address space: program code, data, the user stack, and so on;
   II. control information: the process descriptor, the kernel stack, and so on;
   III. the hardware context (note that interrupts also save a hardware context, just by a different method).
4. schedule() selects a new process to run and calls context_switch() to perform the context switch; that function in turn uses the switch_to macro for the key low-level register and stack switch.
schedule() is a thin wrapper around __schedule(), the main scheduler function, in /linux-3.18.6/kernel/sched/core.c:
/*
 * __schedule() is the main scheduler function.
 *
 * The main means of driving the scheduler and thus entering this function are:
 *
 *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
 *
 *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
 *      paths. For example, see arch/x86/entry_64.S.
 *
 *      To drive preemption between tasks, the scheduler sets the flag in timer
 *      interrupt handler scheduler_tick().
 *
 *   3. Wakeups don't really cause entry into schedule(). They add a
 *      task to the run-queue and that's it.
 *
 *      Now, if the new task added to the run-queue preempts the current
 *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
 *      called on the nearest possible occasion:
 *
 *       - If the kernel is preemptible (CONFIG_PREEMPT=y):
 *
 *         - in syscall or exception context, at the next outmost
 *           preempt_enable(). (this might be as soon as the wake_up()'s
 *           spin_unlock()!)
 *
 *         - in IRQ context, return from interrupt-handler to
 *           preemptible context
 *
 *       - If the kernel is not preemptible (CONFIG_PREEMPT is not set)
 *         then at the next:
 *
 *          - cond_resched() call
 *          - explicit schedule() call
 *          - return from syscall or exception to user-space
 *          - return from interrupt-handler to user-space
 */
static void __sched __schedule(void)
{
    struct task_struct *prev, *next;
    unsigned long *switch_count;
    struct rq *rq;
    int cpu;

need_resched:
    preempt_disable();
    cpu = smp_processor_id();
    rq = cpu_rq(cpu);
    rcu_note_context_switch(cpu);
    prev = rq->curr;

    schedule_debug(prev);

    if (sched_feat(HRTICK))
        hrtick_clear(rq);

    /*
     * Make sure that signal_pending_state()->signal_pending() below
     * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
     * done by the caller to avoid the race with signal_wake_up().
     */
    smp_mb__before_spinlock();
    raw_spin_lock_irq(&rq->lock);

    switch_count = &prev->nivcsw;
    if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
        if (unlikely(signal_pending_state(prev->state, prev))) {
            prev->state = TASK_RUNNING;
        } else {
            deactivate_task(rq, prev, DEQUEUE_SLEEP);
            prev->on_rq = 0;

            /*
             * If a worker went to sleep, notify and ask workqueue
             * whether it wants to wake up a task to maintain
             * concurrency.
             */
            if (prev->flags & PF_WQ_WORKER) {
                struct task_struct *to_wakeup;

                to_wakeup = wq_worker_sleeping(prev, cpu);
                if (to_wakeup)
                    try_to_wake_up_local(to_wakeup);
            }
        }
        switch_count = &prev->nvcsw;
    }

    if (task_on_rq_queued(prev) || rq->skip_clock_update < 0)
        update_rq_clock(rq);

    next = pick_next_task(rq, prev);
    clear_tsk_need_resched(prev);
    clear_preempt_need_resched();
    rq->skip_clock_update = 0;

    if (likely(prev != next)) {
        rq->nr_switches++;
        rq->curr = next;
        ++*switch_count;

        context_switch(rq, prev, next); /* unlocks the rq */
        /*
         * The context switch have flipped the stack from under us
         * and restored the local variables which were saved when
         * this task called schedule() in the past. prev == current
         * is still correct, but it can be moved to another cpu/rq.
         */
        cpu = smp_processor_id();
        rq = cpu_rq(cpu);
    } else
        raw_spin_unlock_irq(&rq->lock);

    post_schedule(rq);

    sched_preempt_enable_no_resched();
    if (need_resched())
        goto need_resched;
}
Two calls in __schedule() matter most for our purposes. The first is next = pick_next_task(rq, prev);, which finds the next process to run; the second is context_switch(rq, prev, next);, which performs the switch.
I. next = pick_next_task(rq, prev): the process-scheduling policies are all encapsulated inside this function.
pick_next_task is in /linux-3.18.6/kernel/sched/core.c:
/*
 * Pick up the highest-prio task:
 */
static inline struct task_struct *
pick_next_task(struct rq *rq, struct task_struct *prev)
{
    const struct sched_class *class = &fair_sched_class;
    struct task_struct *p;

    /*
     * Optimization: we know that if all tasks are in
     * the fair class we can call that function directly:
     */
    if (likely(prev->sched_class == class &&
               rq->nr_running == rq->cfs.h_nr_running)) {
        p = fair_sched_class.pick_next_task(rq, prev);
        if (unlikely(p == RETRY_TASK))
            goto again;

        /* assumes fair_sched_class->next == idle_sched_class */
        if (unlikely(!p))
            p = idle_sched_class.pick_next_task(rq, prev);

        return p;
    }

again:
    for_each_class(class) {
        p = class->pick_next_task(rq, prev);
        if (p) {
            if (unlikely(p == RETRY_TASK))
                goto again;
            return p;
        }
    }

    BUG(); /* the idle class will always have a runnable task */
}
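When the fast path does not apply, for_each_class() walks the scheduler classes from highest to lowest priority and returns the first runnable task found. For reference, this is how the walk is defined in kernel/sched/sched.h of this kernel generation (quoted from memory of the 3.18-era tree, so treat the details as a sketch):

#define sched_class_highest (&stop_sched_class)

#define for_each_class(class) \
    for (class = sched_class_highest; class; class = class->next)

/* The classes are linked through .next, so the traversal order is:
 *   stop_sched_class -> dl_sched_class -> rt_sched_class
 *     -> fair_sched_class -> idle_sched_class
 */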
II. context_switch(rq, prev, next): the process context switch, i.e., switching to the new memory map and the new register state.
context_switch is in /linux-3.18.6/kernel/sched/core.c:
/*
 * context_switch - switch to the new MM and the new
 * thread's register state.
 */
static inline void
context_switch(struct rq *rq, struct task_struct *prev,
               struct task_struct *next)
{
    struct mm_struct *mm, *oldmm;

    prepare_task_switch(rq, prev, next);

    mm = next->mm;
    oldmm = prev->active_mm;
    /*
     * For paravirt, this is coupled with an exit in switch_to to
     * combine the page table reload and the switch backend into
     * one hypercall.
     */
    arch_start_context_switch(prev);

    if (!mm) {
        next->active_mm = oldmm;
        atomic_inc(&oldmm->mm_count);
        enter_lazy_tlb(oldmm, next);
    } else
        switch_mm(oldmm, mm, next);

    if (!prev->mm) {
        prev->active_mm = NULL;
        rq->prev_mm = oldmm;
    }
    /*
     * Since the runqueue lock will be released by the next
     * task (which is an invalid locking op but in the case
     * of the scheduler it's an obvious special-case), so we
     * do an early lockdep release here:
     */
    spin_release(&rq->lock.dep_map, 1, _THIS_IP_);

    context_tracking_task_switch(prev, next);
    /* Here we just switch the register state and the stack. */
    switch_to(prev, next, prev);

    barrier();
    /*
     * this_rq must be evaluated again because prev may have moved
     * CPUs since it called schedule(), thus the 'rq' on its stack
     * frame will be invalid.
     */
    finish_task_switch(this_rq(), prev);
}
Inside it, prepare_task_switch(rq, prev, next); completes the preparatory work before the switch, and switch_to(prev, next, prev); performs the switch itself.
III. switch_to uses the two parameters prev and next: prev points to the current process, and next to the process being scheduled in.
It is defined in /linux-3.18.6/arch/x86/include/asm/switch_to.h:
#ifndef _ASM_X86_SWITCH_TO_H
#define _ASM_X86_SWITCH_TO_H

struct task_struct; /* one of the stranger aspects of C forward declarations */
__visible struct task_struct *__switch_to(struct task_struct *prev,
                                          struct task_struct *next);
struct tss_struct;
void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
                      struct tss_struct *tss);

#ifdef CONFIG_X86_32

#ifdef CONFIG_CC_STACKPROTECTOR
#define __switch_canary \
    "movl %P[task_canary](%[next]), %%ebx\n\t" \
    "movl %%ebx, "__percpu_arg([stack_canary])"\n\t"
#define __switch_canary_oparam \
    , [stack_canary] "=m" (stack_canary.canary)
#define __switch_canary_iparam \
    , [task_canary] "i" (offsetof(struct task_struct, stack_canary))
#else   /* CC_STACKPROTECTOR */
#define __switch_canary
#define __switch_canary_oparam
#define __switch_canary_iparam
#endif  /* CC_STACKPROTECTOR */

/*
 * Saving eflags is important. It switches not only IOPL between tasks,
 * it also protects other tasks from NT leaking through sysenter etc.
 */
#define switch_to(prev, next, last) \
do { \
    /* \
     * Context-switching clobbers all registers, so we clobber \
     * them explicitly, via unused output variables. \
     * (EAX and EBP is not listed because EBP is saved/restored \
     * explicitly for wchan access and EAX is the return value of \
     * __switch_to()) \
     */ \
    unsigned long ebx, ecx, edx, esi, edi; \
 \
    asm volatile("pushfl\n\t"                 /* save    flags */ \
                 "pushl %%ebp\n\t"            /* save    EBP   */ \
                 "movl %%esp,%[prev_sp]\n\t"  /* save    ESP   */ \
                 "movl %[next_sp],%%esp\n\t"  /* restore ESP   */ \
                 "movl $1f,%[prev_ip]\n\t"    /* save    EIP   */ \
                 "pushl %[next_ip]\n\t"       /* restore EIP   */ \
                 __switch_canary \
                 "jmp __switch_to\n"          /* regparm call  */ \
                 "1:\t" \
                 "popl %%ebp\n\t"             /* restore EBP   */ \
                 "popfl\n"                    /* restore flags */ \
 \
                 /* output parameters */ \
                 : [prev_sp] "=m" (prev->thread.sp), \
                   [prev_ip] "=m" (prev->thread.ip), \
                   "=a" (last), \
 \
                   /* clobbered output registers: */ \
                   "=b" (ebx), "=c" (ecx), "=d" (edx), \
                   "=S" (esi), "=D" (edi) \
 \
                   __switch_canary_oparam \
 \
                   /* input parameters: */ \
                 : [next_sp]  "m" (next->thread.sp), \
                   [next_ip]  "m" (next->thread.ip), \
 \
                   /* regparm parameters for __switch_to(): */ \
                   [prev]     "a" (prev), \
                   [next]     "d" (next) \
 \
                   __switch_canary_iparam \
 \
                 : /* reloaded segment registers */ \
                   "memory"); \
} while (0)

#else /* CONFIG_X86_32 */

/* frame pointer must be last for get_wchan */
#define SAVE_CONTEXT    "pushf ; pushq %%rbp ; movq %%rsi,%%rbp\n\t"
#define RESTORE_CONTEXT "movq %%rbp,%%rsi ; popq %%rbp ; popf\t"

#define __EXTRA_CLOBBER \
    , "rcx", "rbx", "rdx", "r8", "r9", "r10", "r11", \
      "r12", "r13", "r14", "r15"

#ifdef CONFIG_CC_STACKPROTECTOR
#define __switch_canary \
    "movq %P[task_canary](%%rsi),%%r8\n\t" \
    "movq %%r8,"__percpu_arg([gs_canary])"\n\t"
#define __switch_canary_oparam \
    , [gs_canary] "=m" (irq_stack_union.stack_canary)
#define __switch_canary_iparam \
    , [task_canary] "i" (offsetof(struct task_struct, stack_canary))
#else   /* CC_STACKPROTECTOR */
#define __switch_canary
#define __switch_canary_oparam
#define __switch_canary_iparam
#endif  /* CC_STACKPROTECTOR */

/* Save restore flags to clear handle leaking NT */
#define switch_to(prev, next, last) \
    asm volatile(SAVE_CONTEXT \
         "movq %%rsp,%P[threadrsp](%[prev])\n\t" /* save RSP */ \
         "movq %P[threadrsp](%[next]),%%rsp\n\t" /* restore RSP */ \
         "call __switch_to\n\t" \
         "movq "__percpu_arg([current_task])",%%rsi\n\t" \
         __switch_canary \
         "movq %P[thread_info](%%rsi),%%r8\n\t" \
         "movq %%rax,%%rdi\n\t" \
         "testl %[_tif_fork],%P[ti_flags](%%r8)\n\t" \
         "jnz ret_from_fork\n\t" \
         RESTORE_CONTEXT \
         : "=a" (last) \
           __switch_canary_oparam \
         : [next] "S" (next), [prev] "D" (prev), \
           [threadrsp] "i" (offsetof(struct task_struct, thread.sp)), \
           [ti_flags] "i" (offsetof(struct thread_info, flags)), \
           [_tif_fork] "i" (_TIF_FORK), \
           [thread_info] "i" (offsetof(struct task_struct, stack)), \
           [current_task] "m" (current_task) \
           __switch_canary_iparam \
         : "memory", "cc" __EXTRA_CLOBBER)

#endif /* CONFIG_X86_32 */

#endif /* _ASM_X86_SWITCH_TO_H */
The switch_to macro above is what finally accomplishes the process switch.
2. Analyzing the process switch, using the 32-bit switch_to code
/*
 * Saving eflags is important. It switches not only IOPL between tasks,
 * it also protects other tasks from NT leaking through sysenter etc.
 */
#define switch_to(prev, next, last) \
do { \
    /* \
     * Context-switching clobbers all registers, so we clobber \
     * them explicitly, via unused output variables. \
     * (EAX and EBP is not listed because EBP is saved/restored \
     * explicitly for wchan access and EAX is the return value of \
     * __switch_to()) \
     */ \
    unsigned long ebx, ecx, edx, esi, edi; \
 \
    asm volatile("pushfl\n\t"                 /* save    flags */ \
                 "pushl %%ebp\n\t"            /* save    EBP   */ \
                 "movl %%esp,%[prev_sp]\n\t"  /* save    ESP   */ \
                 "movl %[next_sp],%%esp\n\t"  /* restore ESP   */ \
                 "movl $1f,%[prev_ip]\n\t"    /* save    EIP   */ \
                 "pushl %[next_ip]\n\t"       /* restore EIP   */ \
                 __switch_canary \
                 "jmp __switch_to\n"          /* regparm call  */ \
                 "1:\t" \
                 "popl %%ebp\n\t"             /* restore EBP   */ \
                 "popfl\n"                    /* restore flags */ \
 \
                 /* output parameters */ \
                 : [prev_sp] "=m" (prev->thread.sp), \
                   [prev_ip] "=m" (prev->thread.ip), \
                   "=a" (last), \
 \
                   /* clobbered output registers: */ \
                   "=b" (ebx), "=c" (ecx), "=d" (edx), \
                   "=S" (esi), "=D" (edi) \
 \
                   __switch_canary_oparam \
 \
                   /* input parameters: */ \
                 : [next_sp]  "m" (next->thread.sp), \
                   [next_ip]  "m" (next->thread.ip), \
 \
                   /* regparm parameters for __switch_to(): */ \
                   [prev]     "a" (prev), \
                   [next]     "d" (next) \
 \
                   __switch_canary_iparam \
 \
                 : /* reloaded segment registers */ \
                   "memory"); \
} while (0)
switch_to uses the two parameters prev and next: prev points to the current process (call it X), and next points to the process being scheduled in, i.e., the next process (call it Y). How Y gets chosen is the job of pick_next_task, analyzed above. Walking through the assembly:
"pushfl": push EFLAGS onto X's kernel stack, saving the flags.
"pushl %%ebp": push the current EBP onto X's kernel stack, saving the frame pointer.
"movl %%esp,%[prev_sp]": save the current ESP into X's thread.sp. [prev_sp] is a symbolic operand name; the output-operand list binds it to prev->thread.sp.
"movl %[next_sp],%%esp": load Y's thread.sp into ESP. From this instruction on, the stack pointer that used to point into X's stack points into Y's. [next_sp] is likewise bound to next->thread.sp in the input-operand list.
"movl $1f,%[prev_ip]": store the address of the local label 1: into X's thread.ip, saving EIP; the next time X is scheduled it will resume from label 1:. [prev_ip] is bound to prev->thread.ip.
"pushl %[next_ip]": push Y's thread.ip onto Y's stack (we are already on Y's stack at this point).
"jmp __switch_to": jump to __switch_to. Because this is a jmp rather than a call, when __switch_to executes ret it pops the value just pushed (Y's thread.ip), so control transfers into Y.
"popl %%ebp": pop Y's saved frame pointer from Y's stack into EBP.
"popfl": pop Y's saved flags from Y's stack.
The final popl/popfl pair exactly undoes the pushfl/pushl %%ebp that were executed back when Y was switched out. The stack hand-off is sketched below.
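To make the hand-off concrete, here is a schematic of process X's kernel stack and saved fields at the instant the stack pointer moves over to Y. This is an annotated sketch, not compilable code:

/*
 * State of X when "movl %[next_sp],%%esp" executes:
 *
 *   X's kernel stack (grows downward)      X's task_struct fields
 *   +----------------------------+
 *   |  ... earlier frames ...    |         thread.sp = ESP marked below
 *   |  saved EFLAGS   (pushfl)   |         thread.ip = address of label 1:
 *   |  saved EBP      (pushl)    |
 *   +----------------------------+  <-- ESP, recorded in prev->thread.sp
 *
 * When X is later switched back in, ESP is reloaded from X's thread.sp
 * and execution resumes at label 1:, whose "popl %%ebp; popfl" exactly
 * undoes the two pushes above.
 */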
3. Experiment: tracing the schedule() function with gdb
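The commands below are a typical session, assuming the MenuOS/qemu environment built in the earlier labs of this course; the kernel image path and rootfs.img are assumptions carried over from those labs, not something this article sets up:

# Boot the kernel under qemu, frozen at startup (-S), with a gdb server on port 1234 (-s)
qemu -kernel linux-3.18.6/arch/x86/boot/bzImage -initrd rootfs.img -s -S

# In a second terminal: attach gdb and break on the scheduling path
gdb
(gdb) file linux-3.18.6/vmlinux   # load debugging symbols
(gdb) target remote:1234          # attach to qemu's gdb server
(gdb) b schedule                  # entry to the scheduler
(gdb) b pick_next_task            # inline function; may not bind on some builds
(gdb) b context_switch            # inline function; may not bind on some builds
(gdb) c                           # run until a breakpoint hits

Note that switch_to itself cannot take a breakpoint; it is a macro, so single-step through context_switch instead.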
4. The general execution flow of a Linux system
The most general case: the running user-mode process X switches to running user-mode process Y.
1. User-mode process X is running.
2. An interrupt occurs: save cs:eip/esp/eflags (current) to kernel stack, then load cs:eip (entry of a specific ISR) and ss:esp (pointing to the kernel stack).
3. SAVE_ALL // save the remaining registers (the interrupt context)
4. schedule() is called during interrupt handling or just before the interrupt returns; the switch_to inside it performs the key process context switch.
5. Execution continues from label 1: and user-mode process Y starts to run (Y was switched out through these very steps at some earlier time, which is why it can resume from label 1:).
6. restore_all // restore the saved context
7. iret - pop cs:eip/ss:esp/eflags from the kernel stack
8. User-mode process Y continues running.
A few special cases:
1. Switching between a user-mode process and a kernel thread, or between two kernel threads, at a scheduling point inside interrupt handling: very similar to the general case, except that an interrupt taken while a kernel thread is running involves no user-mode/kernel-mode transition;
2. A kernel thread calling schedule() directly: only a process context switch happens, with no interrupt context switch, so this is slightly simpler than the general case;
3. A system call that creates a child process: where the child starts executing and how it returns to user mode, as with fork (see the sketch below);
4. Returning to user mode after loading a new executable, as with execve.
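For case 3, a newly created child can be switched to at all only because fork forges an initial hardware context for it. The fragment below is a simplified sketch of the idea behind the 32-bit copy_thread() in this kernel generation; the field names follow the 3.18-era arch/x86/kernel/process_32.c, but treat it as an illustration, not the literal source:

/* Inside copy_thread(), simplified sketch: the child's saved IP is forged
 * to point at ret_from_fork, so the first time switch_to() "returns" in
 * the child it lands there and then follows the normal return-to-user
 * path; ax = 0 is why fork() returns 0 in the child. For case 4, execve's
 * start_thread() instead resets the saved user-mode cs:eip and ss:esp to
 * the new program's entry point. */
p->thread.sp = (unsigned long) childregs;       /* child's kernel stack */
p->thread.ip = (unsigned long) ret_from_fork;   /* child's starting point */
childregs->ax = 0;                              /* fork() returns 0 in child */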