==============================
Deadline IO scheduler tunables
==============================

This little file attempts to document how the deadline io scheduler works.
In particular, it will clarify the meaning of the exposed tunables that may be
of interest to power users.

Selecting IO schedulers
-----------------------
Refer to Documentation/block/switching-sched.rst for information on
selecting an io scheduler on a per-device basis.
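
As a quick illustration, a userspace program can select the scheduler by
writing its name to the device's sysfs ``scheduler`` attribute. The sketch
below assumes a device named ``sda`` and that the scheduler is registered as
``mq-deadline`` (the blk-mq name; older kernels expose it as ``deadline``)::

    /* Minimal sketch: switch /dev/sda to the deadline scheduler.
     * The device name and the scheduler name are assumptions; see
     * Documentation/block/switching-sched.rst for the full story. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/block/sda/queue/scheduler";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return 1;
        }
        /* Writing the scheduler name switches this device over. */
        if (fputs("mq-deadline\n", f) == EOF) {
            perror(path);
            fclose(f);
            return 1;
        }
        return fclose(f) ? 1 : 0;
    }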

------------------------------------------------------------------------------

read_expire     (in ms)
-----------------------

The goal of the deadline io scheduler is to attempt to guarantee a start
service time for a request. As we focus mainly on read latencies, this is
tunable. When a read request first enters the io scheduler, it is assigned
a deadline that is the current time + the read_expire value in units of
milliseconds.
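
The deadline tunables for a device live under its ``iosched`` directory in
sysfs. A minimal sketch of inspecting and lowering read_expire follows; the
device name ``sda`` and the value of 100 ms are assumptions for illustration
only::

    /* Sketch: show the current read_expire for sda, then lower it.
     * Values are in milliseconds, as described above. */
    #include <stdio.h>

    int main(void)
    {
        const char *attr = "/sys/block/sda/queue/iosched/read_expire";
        char buf[64];
        FILE *f = fopen(attr, "r");

        if (!f) {
            perror(attr);
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("current read_expire: %s", buf);
        fclose(f);

        f = fopen(attr, "w");
        if (!f) {
            perror(attr);
            return 1;
        }
        fprintf(f, "100\n");          /* new read deadline: 100 ms */
        return fclose(f) ? 1 : 0;
    }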

write_expire    (in ms)
-----------------------

Similar to read_expire mentioned above, but for writes.


fifo_batch      (number of requests)
------------------------------------

Requests are grouped into ``batches`` of a particular data direction (read or
write) which are serviced in increasing sector order.  To limit extra seeking,
deadline expiries are only checked between batches.  fifo_batch controls the
maximum number of requests per batch.

This parameter tunes the balance between per-request latency and aggregate
throughput.  When low latency is the primary concern, smaller is better (where
a value of 1 yields first-come first-served behaviour).  Increasing fifo_batch
generally improves throughput, at the cost of latency variation.
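
The following toy program is only an illustration of the batching idea above,
not the kernel implementation; the request structure, the 15-unit service
time and the example values are all made up. It serves requests of one data
direction in increasing sector order and only looks for expired deadlines at
batch boundaries::

    #include <stdio.h>
    #include <stdlib.h>

    struct toy_rq {
        unsigned long sector;    /* start sector                    */
        unsigned long expires;   /* deadline (arbitrary time units) */
    };

    static int by_sector(const void *a, const void *b)
    {
        const struct toy_rq *x = a, *y = b;

        return (x->sector > y->sector) - (x->sector < y->sector);
    }

    int main(void)
    {
        struct toy_rq pending[] = {
            { 900, 40 }, { 100, 10 }, { 500, 20 }, { 300, 30 }, { 700, 50 },
        };
        const int nr = sizeof(pending) / sizeof(pending[0]);
        const int fifo_batch = 2;       /* the tunable under discussion */
        unsigned long now = 0;

        /* Within a batch, service strictly in increasing sector order. */
        qsort(pending, nr, sizeof(pending[0]), by_sector);

        for (int i = 0; i < nr; i++) {
            printf("dispatch sector %lu\n", pending[i].sector);
            now += 15;                  /* pretend each request takes 15 */

            /* Deadlines are only consulted at a batch boundary. */
            if ((i + 1) % fifo_batch == 0)
                printf("  batch of %d done at t=%lu: check oldest deadline\n",
                       fifo_batch, now);
        }
        return 0;
    }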


writes_starved  (number of dispatches)
--------------------------------------

When we have to move requests from the io scheduler queue to the block
device dispatch queue, we always give preference to reads. However, we
don't want to starve writes indefinitely either. So writes_starved controls
how many times we give preference to reads over writes. When that has been
done writes_starved number of times, we dispatch some writes based on the
same criteria as reads.
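
As a purely illustrative sketch of that bookkeeping (not the kernel code; the
function and variable names are invented), reads win until they have been
preferred writes_starved times in a row while writes were waiting, at which
point writes get a turn and the count starts over::

    #include <stdbool.h>
    #include <stdio.h>

    static int starved;   /* rounds in which waiting writes were passed over */

    /* Returns true when reads should be dispatched this round. */
    static bool prefer_reads(bool reads_pending, bool writes_pending,
                             int writes_starved)
    {
        if (reads_pending && (!writes_pending || starved < writes_starved)) {
            if (writes_pending)
                starved++;    /* writes waited one more round */
            return true;
        }
        starved = 0;          /* writes are serviced, reset the count */
        return false;
    }

    int main(void)
    {
        /* With writes_starved = 2, reads are preferred twice, then
         * writes are dispatched once, and so on. */
        for (int round = 0; round < 5; round++)
            printf("round %d: %s\n", round,
                   prefer_reads(true, true, 2) ? "reads" : "writes");
        return 0;
    }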


front_merges    (bool)
----------------------

Sometimes it happens that a request enters the io scheduler that is contiguous
with a request that is already on the queue. Either it fits at the back of that
request, or it fits at the front. That is called either a back merge candidate
or a front merge candidate. Due to the way files are typically laid out,
back merges are much more common than front merges. For some workloads, you
may even know that it is a waste of time to attempt to front merge requests.
Setting front_merges to 0 disables this functionality. Front merges may still
occur due to the cached last_merge hint, but since that comes at basically
zero cost we leave it on. We simply disable the rbtree front sector lookup
when the io scheduler merge function is called.
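
To make the two merge candidates concrete, here is a toy check (illustrative
only; the structure and helpers are invented, not the kernel's merge code).
A new request is a back merge candidate when it starts where a queued request
ends, and a front merge candidate when it ends where a queued request starts;
front_merges only controls the lookup for the latter::

    #include <stdbool.h>
    #include <stdio.h>

    struct toy_rq {
        unsigned long sector;        /* first sector of the request */
        unsigned long nr_sectors;    /* length in sectors           */
    };

    static bool back_merge_candidate(const struct toy_rq *queued,
                                     const struct toy_rq *incoming)
    {
        return incoming->sector == queued->sector + queued->nr_sectors;
    }

    static bool front_merge_candidate(const struct toy_rq *queued,
                                      const struct toy_rq *incoming)
    {
        return incoming->sector + incoming->nr_sectors == queued->sector;
    }

    int main(void)
    {
        struct toy_rq queued = { 100, 8 };
        struct toy_rq a = { 108, 8 };    /* contiguous at the back  */
        struct toy_rq b = {  92, 8 };    /* contiguous at the front */

        printf("a: back=%d front=%d\n",
               back_merge_candidate(&queued, &a),
               front_merge_candidate(&queued, &a));
        printf("b: back=%d front=%d\n",
               back_merge_candidate(&queued, &b),
               front_merge_candidate(&queued, &b));
        return 0;
    }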


Nov 11 2002, Jens Axboe <jens.axboe@oracle.com>