From: Con Kolivas <kernel@kolivas.org>

This patch optimises the smt-nice branch points.

The first hunk removes one unnecessary if() and rearranges the remaining
checks from least to most likely to be true, so the && chain usually
short-circuits on the first term.
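
Conceptually the reorder just exploits C's short-circuit evaluation.  A
minimal userspace sketch of the principle (the helper names are
illustrative, not kernel code):

#include <stdbool.h>

/* illustrative stand-ins for the real checks, not kernel functions */
static bool rarely_true(void)        { return false; }
static bool almost_always_true(void) { return true;  }

void branch_order_example(void)
{
	/*
	 * && evaluates left to right and stops at the first false term.
	 * With the least likely term first, the common case is decided
	 * by a single test; with it last, the usually-true tests all
	 * run first for nothing.
	 */
	if (rarely_true() && almost_always_true()) {
		/* slow path, rarely reached */
	}
}

The dropped if() and the reorder change no semantics, only how many
terms are typically evaluated.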

The second hunk substantially improves the "reschedule sibling task"
logic by rescheduling the sibling's task only if it, too, meets the
criterion for being put to sleep.  This causes far less context
switching of low-priority tasks.
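
Both hunks now apply the same criterion, just with the roles of p and
smt_curr swapped.  Roughly, as a hypothetical userspace model (struct
task, its fields and should_sleep_for() are stand-ins for the real
task_struct machinery; per_cpu_gain is the % of throughput one sibling
gains while the other idles):

#include <stdio.h>

struct task {
	int time_slice;	/* remaining timeslice */
	int timeslice;	/* full timeslice at this task's priority */
	int rt;		/* real-time task? */
	int has_mm;	/* has a user address space? */
};

/* should "lo" sleep so its sibling's task "hi" gets the physical cpu? */
static int should_sleep_for(const struct task *lo, const struct task *hi,
			    int per_cpu_gain)
{
	if (lo->rt || !lo->has_mm || !hi->has_mm)
		return 0;
	return (hi->time_slice * (100 - per_cpu_gain) / 100) > lo->timeslice
		|| hi->rt;
}

int main(void)
{
	/* plausible, made-up timeslices for nice +19 and nice 0 tasks */
	struct task nice19 = { .time_slice = 10,  .timeslice = 10,  .has_mm = 1 };
	struct task nice0  = { .time_slice = 100, .timeslice = 100, .has_mm = 1 };

	/* the nice +19 task sleeps for the nice 0 sibling, never the reverse */
	printf("%d %d\n", should_sleep_for(&nice19, &nice0, 25),
			  should_sleep_for(&nice0, &nice19, 25));
	return 0;
}

Because the same test now gates both sleeping and rescheduling, a low
priority sibling is no longer kicked awake only to be put straight back
to sleep, which is where the context switch saving comes from.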

Consequently the benchmark improvements are substantial:

up is uniprocessor
mm1 is before the smt-nice patch
sn is with the smt-nice patch
opt is with this optimisation patch

Time is in seconds

Concurrent kernel compiles, one a plain make, the other a nice +19 make:
		Nice0	Nice19
up		183	235
mm1		208	211
sn		180	237
opt		178	222

As can be seen, the original smt-nice patch simply reduced performance
to that of a uniprocessor whenever there was a nice difference.  With
this patch both compiles finish faster than up (178 vs 183 and 222 vs
235 seconds), so overall throughput improves over up, as is desired
from smt processing.


---

 25-akpm/kernel/sched.c |   14 ++++++--------
 1 files changed, 6 insertions(+), 8 deletions(-)

diff -puN kernel/sched.c~sched-smt-nice-optimisation kernel/sched.c
--- 25/kernel/sched.c~sched-smt-nice-optimisation	2004-03-17 11:29:18.191445224 -0800
+++ 25-akpm/kernel/sched.c	2004-03-17 11:29:18.197444312 -0800
@@ -1972,11 +1972,9 @@ static inline int dependent_sleeper(runq
 		 * task from using an unfair proportion of the
 		 * physical cpu's resources. -ck
 		 */
-		if (p->mm && smt_curr->mm && !rt_task(p) &&
-			((p->static_prio > smt_curr->static_prio &&
-			(smt_curr->time_slice * (100 - sd->per_cpu_gain) /
-			100) > task_timeslice(p)) ||
-			rt_task(smt_curr)))
+		if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) / 100) >
+			task_timeslice(p) || rt_task(smt_curr)) &&
+			p->mm && smt_curr->mm && !rt_task(p))
 				ret |= 1;
 
 		/*
@@ -1984,9 +1982,9 @@ static inline int dependent_sleeper(runq
 		 * or wake it up if it has been put to sleep for priority
 		 * reasons.
 		 */
-		if ((smt_curr != smt_rq->idle &&
-			smt_curr->static_prio > p->static_prio) ||
-			(rt_task(p) && !rt_task(smt_curr)) ||
+		if ((((p->time_slice * (100 - sd->per_cpu_gain) / 100) >
+			task_timeslice(smt_curr) || rt_task(p)) &&
+			smt_curr->mm && p->mm && !rt_task(smt_curr)) ||
 			(smt_curr == smt_rq->idle && smt_rq->nr_running))
 				resched_task(smt_curr);
 	}

_