Re: Large IDE disks

From: Oliver Fromme <olli(at)dorifer.heim3.tu-clausthal.de>
Date: Fri, 7 Jan 2000 00:03:35 +0100 (CET)

Georg Graf <georg-dfbsd@[212.17.119.140]> wrote in list.de-bsd-questions:
> (I will only have a few files; other machines do their dumps onto
> this host.)

Just a small tip on the side: in that case you should also pass -i
to newfs with a suitably high value (I would recommend 128K, and at
most 256K; anything more tends to be counterproductive).
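
A minimal sketch of what that might look like (the device name and the
30 GB filesystem size are illustrative assumptions, not from the
original posting):

```shell
# Hypothetical partition name -- substitute your own.
# -i 131072 asks newfs for one inode per 128K of data space, which
# suits a filesystem holding a few large dump files:
#
#   newfs -i 131072 /dev/da0s1e     # (not executed here)

# Back-of-the-envelope inode count for an assumed 30 GB filesystem:
fs_bytes=$((30 * 1024 * 1024 * 1024))
bytes_per_inode=$((128 * 1024))
echo $((fs_bytes / bytes_per_inode))   # prints 245760
```

Roughly a quarter million inodes is still far more than a dump target
needs, which is why going much higher than 256K buys nothing.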

As for -b and -f, here are excerpts from postings by Matt Dillon
on the subject:

> From: Matthew Dillon <dillon(at)apollo.backplane.com>
> Newsgroups: list.freebsd-hackers
> Subject: Re: Non-standard FFS parameters
> Date: 7 Oct 1999 04:02:06 +0200
> [...]
> :> There may be problems specifying larger block sizes, though nothing
> :> that we can't fix.
> :
> :What kind of problems? Will it simply not work, or will it corrupt the
> :FS?
>
> Well, the kernel itself has a 256KB block size limit. The types of
> problems that will occur with large block sizes are mostly going to
> be related to the fact that the buffer cache is not tuned to deal
> with large block sizes, not even in -current. So it will not be
> very efficient. Also, caching large blocks creates inefficiencies in
> the VM system because the VM system likes to cache page-sized chunks
> (i.e. 4K on i386). The buffer cache is much less efficient dealing
> with large buffers which have had holes poked into them due to the VM
> caching algorithms.
>
> The disks will not be able to transfer file data any faster using large
> blocks versus the default, so beyond a certain point the performance
> simply stops improving.
>
> I would recommend a 16K or 32K block size and the only real reason for
> doing it that way is to reduce the number of indirect blockmap blocks
> required to maintain the file.

> From: Matthew Dillon <dillon(at)apollo.backplane.com>
> Newsgroups: list.freebsd-hackers
> Subject: Re: Non-standard FFS parameters
> Date: 7 Oct 1999 19:32:27 +0200
> [...]
> :Running bonnie on the filesystem with these parameters results in an
> :unkillable process sitting in getblk (it's the first phase of the bonnie
> :test, which uses putc() to create the file). It just sits there and
> :doesn't consume CPU. The OS is 3.3-R.
>
> Hmmm. It's quite possible, 3.x's getnewbuf() code is pretty nasty. I
> have a solution under test for 4.x (current). There simply may not be
> anything that can be done for 3.x short of porting current's getnewbuf()
> code over, and doing so has been deemed too risky by DG due to all the
> collateral porting that would also have to be done. I agree with that
> assessment, plus it's a huge amount of work that I don't have time to do
> at this late date.
>
> Try using a smaller block size, like 16K. If that doesn't work then just
> stick with 8K I guess. The kernel's clustering code should still make it
> reasonably efficient.
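
Putting the quoted advice together, an invocation might look like the
sketch below; the 8:1 block-to-fragment ratio is the usual FFS
convention, and the device name is again a placeholder:

```shell
# 16K blocks per Matt's recommendation; FFS convention keeps the
# fragment size at one eighth of the block size:
block=16384
frag=$((block / 8))
echo "$frag"   # prints 2048 (fragment size in bytes)

# The resulting invocation (hypothetical, not executed here):
#   newfs -b 16384 -f 2048 -i 131072 /dev/da0s1e
```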

Regards
   Olli

-- 
Oliver Fromme, Leibnizstr. 18/61, 38678 Clausthal, Germany
(Info: finger userinfo:olli(at)dorifer.heim3.tu-clausthal.de)
"In jedem Stück Kohle wartet ein Diamant auf seine Geburt"
                                         (Terry Pratchett)
