 - RU.UNIX.BSD ------------------------------------------------------------------
 From : Slawa Olhovchenkov                   2:5030/500     11 Jan 2007  03:59:26
 To : All
 Subject : News from the field
 -------------------------------------------------------------------------------- 
 
 
 From: lulf@stud.ntnu.no
 Subject: Pluggable Disk Schedulers in GEOM
 
 Hi,
 
 I was wondering whether anyone has started yet on the pluggable
 disk-scheduler project from the "new ideas" page.
 
 I was thinking about how one could implement this in GEOM by creating a
 lightweight scheduler API/framework integrated into GEOM. The framework
 would be in charge of selecting which scheduler is used by the g_up and
 g_down threads.
 
 I've put down some design goals for this:
 1. Little or no overhead in I/O processing with the default scheduler,
   compared to the "old" way.
 2. Easy modification, preferably on-the-fly switching of schedulers.
 3. Make it possible for many different schedulers to be implemented,
   without presenting too alien an interface to them, but at the same time
   without restricting them too much.
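To make the goals above concrete, here is a minimal user-space sketch of what such a pluggable interface could look like. This is NOT the actual FreeBSD GEOM API: the names g_sched_ops, gs_enqueue, gs_next, g_sched_current and the demo FIFO scheduler are all invented for illustration. A scheduler is just a table of operations, and switching the current-scheduler pointer gives on-the-fly replacement (goal 2); in the kernel this would of course need proper locking and draining of in-flight requests.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct bio_stub {                /* stand-in for struct bio */
	long offset;
	struct bio_stub *next;
};

/* A scheduler is a named table of operations the framework calls. */
struct g_sched_ops {
	const char *gs_name;
	void (*gs_enqueue)(struct bio_stub *bp);  /* accept a request  */
	struct bio_stub *(*gs_next)(void);        /* pick next request */
};

/* --- a trivial FIFO scheduler, the "default" one ----------------- */
static struct bio_stub *fifo_head, *fifo_tail;

static void
fifo_enqueue(struct bio_stub *bp)
{
	bp->next = NULL;
	if (fifo_tail != NULL)
		fifo_tail->next = bp;
	else
		fifo_head = bp;
	fifo_tail = bp;
}

static struct bio_stub *
fifo_next(void)
{
	struct bio_stub *bp = fifo_head;

	if (bp != NULL) {
		fifo_head = bp->next;
		if (fifo_head == NULL)
			fifo_tail = NULL;
	}
	return (bp);
}

static struct g_sched_ops fifo_sched = {
	.gs_name = "fifo",
	.gs_enqueue = fifo_enqueue,
	.gs_next = fifo_next,
};

/*
 * The framework keeps one "current scheduler" pointer; g_up/g_down
 * would ask for it on each pass, so swapping the pointer switches
 * schedulers on the fly.
 */
static struct g_sched_ops *cur_sched = &fifo_sched;

struct g_sched_ops *
g_sched_current(void)
{
	return (cur_sched);
}

void
g_sched_switch(struct g_sched_ops *ops)
{
	cur_sched = ops;
}
```

The point of the vtable shape is goal 3: a new scheduler only has to fill in two function pointers, and how it organizes its queue internally is entirely its own business.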
 
 More specifically, my plan was to change g_up_procbody/g_down_procbody to
 ask the scheduler framework which scheduler to use, and then implement
 procedures in that framework to handle the details of loading, switching
 and unloading different I/O schedulers. Then I would extract the default
 I/O scheduler and try out some other ways to schedule I/O. Also, I'm not
 sure how I would handle each scheduler's way of organizing its queue. One
 should allow for different types of bioq's, since schedulers may have
 different needs when organizing their queues (a heap, for example).
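As an example of a non-list queue organization, here is a sketch of a heap-backed request queue. All names here are invented, and real code would store bio pointers rather than bare offsets; a binary min-heap keyed on offset approximates one sweep of an elevator scheduler, dispatching requests in ascending offset order regardless of arrival order.

```c
#include <assert.h>

#define HEAP_MAX 128

static long heap[HEAP_MAX];	/* request offsets; a stand-in for bios */
static int heap_n;

static void
heap_push(long off)
{
	int i = heap_n++;

	heap[i] = off;
	while (i > 0 && heap[(i - 1) / 2] > heap[i]) {	/* sift up */
		long t = heap[i];

		heap[i] = heap[(i - 1) / 2];
		heap[(i - 1) / 2] = t;
		i = (i - 1) / 2;
	}
}

/* Pop the smallest offset; caller must ensure the queue is non-empty. */
static long
heap_pop(void)
{
	long top = heap[0];
	int i = 0;

	heap[0] = heap[--heap_n];
	for (;;) {					/* sift down */
		int l = 2 * i + 1, r = l + 1, s = i;
		long t;

		if (l < heap_n && heap[l] < heap[s])
			s = l;
		if (r < heap_n && heap[r] < heap[s])
			s = r;
		if (s == i)
			break;
		t = heap[i]; heap[i] = heap[s]; heap[s] = t;
		i = s;
	}
	return (top);
}
```

Enqueue and dequeue are both O(log n), versus the O(n) insertion of a sorted list, which is exactly the kind of trade-off that argues for letting each scheduler pick its own queue structure.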
 
 I've started on some of my tampering in a p4 branch, lulf_gpds. I have a
 DESCRIPTION document there that may explain some of my thoughts and
 problems further. Some small amount of code is written, but I want to hear
 others' thoughts on this before I go crashing around doing things I might
 regret later :)
 
 I was also thinking of an alternative way to implement this: a
 "gpds" layer that could provide different schedulers to service I/O
 requests. That would make it possible to fine-grain the scheduling more,
 say by noting that the system drive is used in one characteristic way for
 which a specific scheduler algorithm is more appropriate, while another
 drive has a different access pattern and should therefore use a different
 algorithm. However, this should also be doable directly in GEOM as
 previously described, with a bit more tampering with other code. That is
 probably the most efficient way, since it has no overhead from an extra
 GEOM class.
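The per-device idea could be as simple as a binding table consulted per request. Everything in this sketch is hypothetical, including the provider names, the scheduler names, and the gpds_sched_for helper; it only illustrates the lookup that a "gpds" layer (or the in-GEOM variant) would perform.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct gpds_binding {
	const char *provider;	/* e.g. "ada0" */
	const char *scheduler;	/* scheduler chosen for that provider */
};

/* Example bindings: one drive per workload characteristic. */
static struct gpds_binding bindings[] = {
	{ "ada0", "elevator" },	/* system drive: mostly sequential */
	{ "ada1", "deadline" },	/* data drive: latency-sensitive   */
};

static const char *default_sched = "fifo";

/* Return the scheduler name bound to a provider, or the default. */
const char *
gpds_sched_for(const char *provider)
{
	size_t i;

	for (i = 0; i < sizeof(bindings) / sizeof(bindings[0]); i++)
		if (strcmp(bindings[i].provider, provider) == 0)
			return (bindings[i].scheduler);
	return (default_sched);
}
```

A real implementation would map providers to g_sched_ops pointers rather than names, and would let the bindings be changed at run time, but the shape of the decision is the same.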
 
 I also have some questions about the GEOM layer itself. Does the VM
 manager actually swap pages out to disk via GEOM, or does it do that by
 itself (which would make more sense in terms of efficiency)?
 
 I'd like to hear the GEOM gurus' views on this.
 Does this sound doable and worth spending time on?
 Is there something I've overlooked? Have I completely lost my mind?
 I sometimes manage to write something a bit different from what I'm
 actually thinking :)
 
 Anyway, I'd like to research this topic a bit, just to see how much
 different I/O scheduling actually matters for different workloads.
 
 Comments are welcome!
 ... To tell apart the shades of shit, you have to be a gourmet.
 --- GoldED+/BSD 1.1.5
  * Origin:  (2:5030/500)
 
 
