tdev=hda1; nice badblocks -p 99 -c 90000 -nv /dev/$tdev 2>&1 | tee -a /tmp/badblocks.$tdev.log &
tdev=hda2; nice badblocks -p 99 -c 90000 -wv /dev/$tdev 2>&1 | tee -a /tmp/badblocks.$tdev.log &
$ jot - 900000 100000 -50000 | xargsi -t badblocks -c {} -nv /dev/hda
[...]
badblocks -c 150000 -nv /dev/hda
badblocks: Cannot allocate memory while allocating buffers
badblocks -c 100000 -nv /dev/hda
Initializing random test data
Checking for bad blocks in non-destructive read-write mode
[...]
By default only a non-destructive read-only test is done.
-n Use non-destructive read-write mode.
-w Use write-mode test. With this option, badblocks scans for bad blocks by writing some patterns (0xaa, 0x55, 0xff, 0x00) on every block of the device, reading every block and comparing the contents. This option may not be combined with the -n option, as they are mutually exclusive.
Badblocks needs memory proportional to the number of blocks tested at once in read-only mode, to twice that number in read-write mode (NB: this might not be true; I noticed that the memory requirement is constant, as in e2fsprogs-1.27-9. Tong), and to three times that number in non-destructive read-write mode.
If you set the number-of-blocks parameter (-c) to too high a value, badblocks will exit almost immediately with an out-of-memory error "while allocating buffers" in non-destructive read-write mode.
If you set it too low, however, for a non-destructive-write-mode test, then it's possible for questionable blocks on an unreliable hard drive to be hidden by the effects of the hard disk track buffer.
-p num_passes Repeat scanning the disk until there are no new blocks discovered in num_passes consecutive scans of the disk. Default is 0, meaning badblocks will exit after the first pass.
-v Verbose mode.
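The memory note above can be turned into a quick estimate before picking a -c value. A sketch, assuming the man page's "three buffers" rule for non-destructive read-write mode and the default 1024-byte block size:

```shell
# Estimate badblocks' buffer memory in non-destructive read-write mode.
# Assumptions: three buffers of (block_size * count) bytes each, and the
# default 1024-byte block size; adjust both to match your invocation.
block_size=1024   # bytes per block (badblocks default)
count=90000       # blocks tested at once (the -c value)
echo "approx $(( 3 * block_size * count / 1024 / 1024 )) MiB of buffers"
```

For -c 90000 this comes to roughly 263 MiB, and -c 150000 to about 439 MiB, which makes the out-of-memory failure in the jot/xargs run above plausible on a small-memory machine.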
With drives disappearing/reappearing I'd be tempted to say you have a pending drive failure, or other hardware problem.
Run dmesg and look for disk errors with something like this:
dmesg | grep hd
Steve
I agree with Steve, above, about pending (or existing) disk failures. You might want to look into the smartmontools suite as well. You can schedule tests for your disks, and query them directly about their status.
write back with any info from smartctl when you've got it.
to run a bunch of short tests on all your disks, just do:
for disk in /dev/hd? ; do smartctl -t short "$disk"; done
There should be some messages printed about what time you should expect the tests to complete. Once the tests have completed (probably 5 minutes max for most short tests, but it depends on the disks), you can read the info with:
for disk in /dev/hd? ; do echo "===${disk}==="; smartctl -a "$disk"; done > diskreports.txt
You should then be able to read the diskreports.txt file to see what the disks have to say for themselves.
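If you just want the one-line health verdict out of each saved report rather than the whole dump, a small awk pass will pull it. (A sketch: the here-doc below stands in for real smartctl -a output, and the file name is made up.)

```shell
# Stand-in report file; a real one would come from `smartctl -a` as above
cat > /tmp/diskreport-sample.txt <<'EOF'
SMART overall-health self-assessment test result: PASSED
EOF

# Print just the verdict (PASSED / FAILED) from the report
awk -F': *' '/overall-health/ {print $2}' /tmp/diskreport-sample.txt
```

The same one-liner works on diskreports.txt, printing one verdict per disk section.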
For a single disk, of course, the commands are even simpler:
smartctl -t short /dev/hdX   ## wait until the suggested time
smartctl -a /dev/hdX
documented on: 29 Sep 2007, dkg
% smartctl -t short /dev/sda; sleep 2m; printf '\a'
smartctl version 5.37 [i686-pc-linux-gnu] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

Sending command: "Execute SMART Short self-test routine immediately in off-line mode".
Drive command "Execute SMART Short self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 2 minutes for test to complete.
Test will complete after Sat Mar 29 15:46:13 2008
Use smartctl -X to abort test.
% smartctl -a /dev/sda
Short self-test routine recommended polling time: ( 2) minutes.
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num  Test_Description    Status                    Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error         00%            27420  -
documented on: 2008-03-29
Solving the problem of the smartd daemon failing to start.
% /etc/init.d/smartmontools start
Starting S.M.A.R.T. daemon: smartd failed!
The device /dev/sda is actually a SATA disk.
Need to add "-d ata" to /etc/smartd.conf
Although it is OK to omit the '-d ata' switch from the 'smartctl' command:
% smartctl -i -d ata /dev/sda
Model Family:     Maxtor DiamondMax 10 family (ATA/133 and SATA/150)
Device Model:     Maxtor 6B200M0
Serial Number:    B405M10H
Firmware Version: BANC1B10
User Capacity:    203,928,109,056 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   7
ATA Standard is:  ATA/ATAPI-7 T13 1532D revision 0
Local Time is:    Sat Mar 29 16:25:18 2008 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
% smartctl -i /dev/sda
(same output as with '-d ata')
in the daemon config file, the '-d ata' switch is still necessary.
# change /etc/default/smartmontools
$ diff -wU 1 /etc/default/smartmontools.org /etc/default/smartmontools
--- /etc/default/smartmontools.org      2008-03-29 15:56:43.000000000 -0400
+++ /etc/default/smartmontools  2008-03-29 15:57:57.000000000 -0400
@@ -8,3 +8,3 @@
 # uncomment to start smartd on system startup
-#start_smartd=yes
+start_smartd=yes
# try start the daemon
% /etc/init.d/smartmontools start
Starting S.M.A.R.T. daemon: smartd failed!
# probe error
% tail /var/log/messages /var/log/daemon.log
==> /var/log/daemon.log <==
Mar 29 16:11:11 cxmr smartd[10621]: smartd version 5.37 [i686-pc-linux-gnu] Copyright (C) 2002-6 Bruce Allen
Mar 29 16:11:11 cxmr smartd[10621]: Home page is http://smartmontools.sourceforge.net/
Mar 29 16:11:11 cxmr smartd[10621]: Opened configuration file /etc/smartd.conf
Mar 29 16:11:11 cxmr smartd[10621]: Configuration file /etc/smartd.conf parsed.
Mar 29 16:11:11 cxmr smartd[10621]: Device: /dev/sda, opened
Mar 29 16:11:11 cxmr smartd[10621]: Device /dev/sda: ATA disk detected behind SAT layer
Mar 29 16:11:11 cxmr smartd[10621]: Try adding '-d sat' to the device line in the smartd.conf file.
Mar 29 16:11:11 cxmr smartd[10621]: For example: '/dev/sda -a -d sat'
Mar 29 16:11:11 cxmr smartd[10621]: Unable to register SCSI device /dev/sda at line 23 of file /etc/smartd.conf
Mar 29 16:11:11 cxmr smartd[10621]: Unable to register device /dev/sda (no Directive -d removable). Exiting.
# change /etc/smartd.conf
$ diff -wU 1 /etc/smartd.conf.org /etc/smartd.conf
--- /etc/smartd.conf.org        2007-04-05 04:36:42.000000000 -0400
+++ /etc/smartd.conf    2008-03-29 16:12:24.000000000 -0400
@@ -21,3 +21,4 @@
 # list the devices that they wish to monitor.
-DEVICESCAN -m root -M exec /usr/share/smartmontools/smartd-runner
+#DEVICESCAN -m root -M exec /usr/share/smartmontools/smartd-runner
+/dev/sda -a -o on -S on -s (S/../.././04|L/../../6/04) -I 194 -m root -d ata
# try start the daemon
% /etc/init.d/smartmontools start
Starting S.M.A.R.T. daemon: smartd.
% tail /var/log/daemon.log
Mar 29 16:12:32 cxmr smartd[10720]: Device: /dev/sda, enable SMART Automatic Offline Testing failed.
Mar 29 16:12:32 cxmr smartd[10720]: Device: /dev/sda, is SMART capable. Adding to "monitor" list.
Mar 29 16:12:32 cxmr smartd[10720]: Monitoring 1 ATA and 0 SCSI devices
Mar 29 16:12:32 cxmr smartd[10720]: Device: /dev/sda, Failed SMART usage Attribute: 194 Temperature_Celsius.
Mar 29 16:12:32 cxmr smartd[10720]: Sending warning via mail to root ...
Mar 29 16:12:33 cxmr smartd[10720]: Warning via mail to root: successful
Mar 29 16:12:33 cxmr smartd[10738]: smartd has fork()ed into background mode. New PID=10738.
Mar 29 16:12:33 cxmr smartd[10738]: file /var/run/smartd.pid written containing PID 10738
documented on: 2008-03-29
Newsgroups: gmane.linux.debian.user Date: Mon, 31 Mar 2008
> I saw the following for the first time when I rebooted just now:
> Mar 31 09:10:04 cxmr kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
> Mar 31 09:10:04 cxmr kernel: ata1.00: cmd b0/d2:f1:00:4f:c2/00:00:00:00:00/00 tag 0 cdb 0x0 data 123392 in
> Mar 31 09:10:04 cxmr kernel:          res 50/00:f1:00:4f:c2/00:00:00:00:00/00 Emask 0x202 (HSM violation)
> Mar 31 09:10:04 cxmr kernel: ata1: soft resetting port
> Mar 31 09:10:04 cxmr kernel: ata1.00: configured for UDMA/133
> Mar 31 09:10:04 cxmr kernel: ata1: EH complete
> Mar 31 09:10:04 cxmr kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
> Mar 31 09:10:04 cxmr kernel: ata1.00: cmd b0/d2:f1:00:4f:c2/00:00:00:00:00/00 tag 0 cdb 0x0 data 123392 in
> Mar 31 09:10:04 cxmr kernel:          res 50/00:f1:00:4f:c2/00:00:00:00:00/00 Emask 0x202 (HSM violation)
> Mar 31 09:10:04 cxmr kernel: ata1: soft resetting port
> Mar 31 09:10:04 cxmr kernel: ata1.00: configured for UDMA/133
> Mar 31 09:10:04 cxmr kernel: ata1: EH complete
>
> It repeated several times after. What does it mean?
Doesn't look good whatever it is. Hope you have a good reliable backup.
> FYI, my box experiences sudden freeze and lock up recently so I enabled my > smart monitor. In fact the reason for the reboot is that the system locked > up entirely. It all goes like this, I didn't do anything, and it freezes.
This doesn't sound good either.
> BTW, I am still not quite sure what will happen when I enabled smartd. Do > I get report from cron, or I have to pull it myself from time to time?
See man smartctl. You run a '-t long' test on the drive which will tell you how long the test will take. Wait at least that long and use smartctl to check the results. Ideally "completed without error" but you will also get a list of all smart parameter values so you can see how things are going.
NB: if SMART says that the drive is failing believe it. If SMART says that the drive is fine, look further. Check the drive temp, listen to it, watch those errors. Given those errors, I'd be checking the warranty on the drive.
Doug.
Douglas A. Tutty @porchlight.ca
Newsgroups: gmane.linux.debian.user Date: Sun, 06 Mar 2005 17:42:43 +0100
> After a string of hard drive failures I've been trying to monitor my
> drives more carefully. I had ide-smart run the offline tests and got
> results. Can anyone shed some light on how they should be
> interpreted? For example, in
>
> Id=202 Status=10 {Advisory Online } Value=253 Threshold= 0 Passed
> Id=203 Status=11 {Prefailure Online } Value=253 Threshold=180 Passed
>
> I think I should read the column which says 'Advisory' or 'Prefailure'
> as a description of the test, not the result. In which case, they
> passed so I shouldn't worry. But I could also interpret the second
> line as saying "The drive passes now but is about to fail". Is either
> correct?
Don't know about ide-smart, but I've had drive issues too and use smartmontools now. I configured the daemon to issue short self-tests daily and a long one once a week with an entry like this in /etc/smartd.conf:
/dev/hda -a -o on -S on -s (S/../.././04|L/../../6/04) -I 194 -m root
Apart from that, i have a cronjob to daily mail me the output of smartctl -a /dev/hda
That output contains all the drive status params smart can gather, like
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0007   065   065   000    Pre-fail  Always       -       6016
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       18
....
Now, here you'll want to watch the Pre-fail attribute counts, since, as their type suggests, they tend to (rapidly) increase before a failure. In all my disk issues this has been particularly true for Raw_Read_Error_Rate, so I watch that one closely. Small values like 3 or 10 are OK, but rapid increase over a couple of days to hundreds or thousands means the drive is about to die, typically during the next one or two days. Single blocks may already be unreadable at that point, so backup the drive immediately.
Note that even in that case, smart might still say the drive PASSED the self test, so a PASS should not really comfort you. It definitely makes sense to check the status attributes yourself.
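One way to "check the status attributes yourself", as suggested, is to filter the table by its TYPE column and watch the Pre-fail raw values over time. A sketch (the here-doc stands in for real smartctl -a output; the file name is made up):

```shell
# Stand-in attribute table; a real one would come from `smartctl -a /dev/hda`
cat > /tmp/attrs.txt <<'EOF'
  1 Raw_Read_Error_Rate     0x000b   100   100   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0007   065   065   000    Pre-fail  Always       -       6016
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       18
EOF

# Column 7 is TYPE; print attribute name and raw value for Pre-fail rows
awk '$7 == "Pre-fail" {print $2, $NF}' /tmp/attrs.txt
```

Save that output daily (e.g. from the same cronjob) and a rapid climb in the raw values stands out at a glance.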
Bruno Hertz
Extracted from
To understand how smartmontools works, it's helpful to know the history of SMART. The original SMART spec (SFF-8035i) was written by a group of disk drive manufacturers. In Revision 2 (April 1996) disks keep an internal list of up to 30 Attributes corresponding to different measures of performance and reliability, such as read and seek error rates.
Each Attribute has a one-byte normalized value ranging from 1 to 253 and a corresponding one-byte threshold. If one or more of the normalized Attribute values is less than or equal to its corresponding threshold, then either the disk is expected to fail in less than 24 hours or it has exceeded its design or usage lifetime. Some of the Attribute values are updated as the disk operates. Others are updated only through off-line tests that temporarily slow down disk reads/writes and, thus, must be run with a special command. In late 1995, parts of SFF-8035i were merged into the ATA-3 standard.
To make use of these disk features, you need to know how to use smartmontools to examine the disk's Attributes, query the disk's health status, run disk self-tests, examine the disk's self-test log (results of the last 21 self-tests) and examine the disk's ATA error log (details of the last five disk errors).
To begin, give the command
smartctl -a /dev/hda
If SMART is not enabled on the disk, you first must enable it with the -s on option. You then see output similar to the output shown in Listings 1-5.
The first part of the output (Listing 1) lists model/firmware information about the disk — this one is an IBM/Hitachi GXP-180 example. Smartmontools has a database of disk types. If your disk is in the database, it may be able to interpret the raw Attribute values correctly.
Device Model:     IC35L120AVV207-0
Serial Number:    VNVD02G4G3R72G
Firmware Version: V24OA63A
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   6
ATA Standard is:  ATA/ATAPI-6 T13 1410D revision 3a
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
The second part of the output (Listing 2) shows the results of the health status inquiry. This is the one-line Executive Summary Report of disk health; the disk shown here has passed. If your disk health status is FAILING, back up your data immediately. The remainder of this section of the output provides information about the disk's capabilities and the estimated time to perform short and long disk self-tests.
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Off-line data collection status: (0x82) Offline data collection activity was
                                        completed without error.
                                        Auto Off-line Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete off-line
data collection:                 (2855) seconds.
Offline data collection
capabilities:                    (0x1b) SMART execute Offline immediate.
                                        Automatic timer ON/OFF support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        No Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        (  48) minutes.
The third part of the output (Listing 3) lists the disk's table of up to 30 Attributes (from a maximum set of 255). Remember that Attributes are no longer part of the ATA standard, but most manufacturers still support them. Although SFF-8035i doesn't define the meaning or interpretation of Attributes, many have a de facto standard interpretation. For example, this disk's 13th Attribute (ID #194) tracks its internal temperature.
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   060    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   155   155   050    Pre-fail  Offline      -       225
  3 Spin_Up_Time            0x0007   097   097   024    Pre-fail  Always       -       293 (Average 270)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       10
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   125   125   020    Pre-fail  Offline      -       36
  9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       3548
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       10
192 Power-Off_Retract_Count 0x0032   100   100   050    Old_age   Always       -       158
193 Load_Cycle_Count        0x0012   100   100   050    Old_age   Always       -       158
194 Temperature_Celsius     0x0002   189   189   000    Old_age   Always       -       29 (Lifetime Min/Max 23/33)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0
Studies have shown that lowering disk temperatures by as little as 5°C significantly reduces failure rates, though this is less of an issue for the latest generation of fluid-drive bearing drives. One of the simplest and least expensive steps you can take to ensure disk reliability is to add a cooling fan that blows cooling air directly onto or past the system's disks.
Each Attribute has a six-byte raw value (RAW_VALUE) and a one-byte normalized value (VALUE). In this case, the raw value stores three temperatures: the disk's temperature in Celsius (29), plus its lifetime minimum (23) and maximum (33) values. The format of the raw data is vendor-specific and not specified by any standard. To track disk reliability, the disk's firmware converts the raw value to a normalized value ranging from 1 to 253. If this normalized value is less than or equal to the threshold (THRESH), the Attribute is said to have failed, as indicated in the WHEN_FAILED column. The column is empty because none of these Attributes has failed. The lowest (WORST) normalized value also is shown; it is the smallest value attained since SMART was enabled on the disk. The TYPE of the Attribute indicates if Attribute failure means the device has reached the end of its design life (Old_age) or it's an impending disk failure (Pre-fail). For example, disk spin-up time (ID #3) is a prefailure Attribute. If this (or any other prefail Attribute) fails, disk failure is predicted in less than 24 hours.
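The VALUE-versus-THRESH test described above can be applied mechanically to a saved attribute table. A sketch, assuming the column order of Listing 3 (the sample data is made up, with one deliberately failing row):

```shell
# Sample rows in Listing 3's column order: ID# NAME FLAG VALUE WORST THRESH ...
cat > /tmp/table.txt <<'EOF'
  3 Spin_Up_Time            0x0007   097   097   024    Pre-fail  Always       -       293
  5 Reallocated_Sector_Ct   0x0033   004   004   005    Pre-fail  Always       -       812
EOF

# An Attribute has "failed" when its normalized VALUE <= THRESH;
# the "+ 0" forces numeric comparison despite the leading zeros
awk '$4 + 0 <= $6 + 0 {print $2 " FAILED (value " $4 ", threshold " $6 ")"}' /tmp/table.txt
```

This reproduces what the WHEN_FAILED column reports, but lets you script alerts on snapshots taken over time.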
The names/meanings of Attributes and the interpretation of their raw values is not specified by any standard. Different manufacturers sometimes use the same Attribute ID for different purposes. For this reason, the interpretation of specific Attributes can be modified using the -v option to smartctl; please see the man page for details. For example, some disks use Attribute 9 to store the power-on time of the disk in minutes; the -v 9,minutes option to smartctl correctly modifies the Attribute's interpretation. If your disk model is in the smartmontools database, these -v options are set automatically.
The next part of the smartctl -a output (Listing 4) is a log of the disk errors. This particular disk has been error-free, and the log is empty. Typically, one should worry only if disk errors start to appear in large numbers. An occasional transient error that does not recur usually is benign. The smartmontools Web page has a number of examples of smartctl -a output showing some illustrative error log entries. They are timestamped with the disk's power-on lifetime in hours when the error occurred, and the individual ATA commands leading up to the error are timestamped with the time in milliseconds after the disk was powered on. This shows whether the errors are recent or old.
SMART Error Log Version: 1
No Errors Logged
The final part of the smartctl output (Listing 5) is a report of the self-tests run on the disk. These show two types of self-tests, short and long. (ATA-6/7 disks also may have conveyance and selective self-tests.) These can be run with the commands smartctl -t short /dev/hda and smartctl -t long /dev/hda and do not corrupt data on the disk. Typically, short tests take only a minute or two to complete, and long tests take about an hour. These self-tests do not interfere with the normal functioning of the disk, so the commands may be used for mounted disks on a running system. On our computing cluster nodes, a long self-test is run with a cron job early every Sunday morning. The entries in Listing 5 all are self-tests that completed without errors; the LifeTime column shows the power-on age of the disk when the self-test was run. If a self-test finds an error, the Logical Block Address (LBA) shows where the error occurred on the disk. The Remaining column shows the percentage of the self-test remaining when the error was found. If you suspect that something is wrong with a disk, I strongly recommend running a long self-test to look for problems.
SMART Self-test log, version number 1
Num  Test_Description    Status                Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended off-line   Completed                   00%             3525  -
# 2  Extended off-line   Completed                   00%             3357  -
# 3  Short off-line      Completed                   00%             3059  -
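Because the last column holds the LBA of the first error ('-' when the test was clean), a one-line awk filter surfaces only the self-tests that found something. A sketch over a made-up log with one failing entry:

```shell
# Made-up self-test log; a real one comes from `smartctl -a` (Listing 5)
cat > /tmp/selftest.log <<'EOF'
# 1  Extended off-line   Completed                00%      3525  -
# 2  Extended off-line   Completed: read failure  90%      3357  12345
EOF

# Entries start with '#'; print only those whose last field is a real LBA
awk '/^#/ && $NF != "-"' /tmp/selftest.log
```

If the same LBA keeps showing up across repeated long tests, that is the repeatable-error case discussed later in this article.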
The smartctl -t offline command can be used to carry out off-line tests. These off-line tests do not make entries in the self-test log. They date back to the SFF-8035i standard, and update values of the Attributes that are not updated automatically under normal disk operation (see the UPDATED column in Listing 3). Some disks support automatic off-line testing, enabled by smartctl -o on, which automatically runs an off-line test every few hours.
The smartd daemon does regular monitoring for you. It monitors the disk's SMART data for signs of problems. It can be configured to send e-mail to users or system administrators or to run arbitrary scripts if problems are detected. By default, when smartd is started, it registers the system's disks. It then checks their status every 30 minutes for failing Attributes, failing health status or increased numbers of ATA errors or failed self-tests and logs this information with SYSLOG in /var/log/messages by default.
You can control and fine-tune the behavior of smartd using the configuration file /etc/smartd.conf. This file is read when smartd starts up, before it forks into the background. Each line contains Directives pertaining to a different disk. The configuration file on our computing cluster nodes looks like this:
# /etc/smartd.conf config file
/dev/hda -S on -o on -a -I 194 -m sense@phys.uwm.edu
/dev/hdc -S on -o on -a -I 194 -m sense@phys.uwm.edu
The first column indicates the device to be monitored. The -o on Directive enables the automatic off-line testing, and the -S on Directive enables automatic Attribute autosave. The -m Directive is followed by an e-mail address to which warning messages are sent, and the -a Directive instructs smartd to monitor all SMART features of the disk. In this configuration, smartd logs changes in all normalized attribute values. The -I 194 Directive means ignore changes in Attribute #194, because disk temperatures change often, and it's annoying to have such changes logged on a regular basis.
Normally, smartd is started by the standard UNIX init mechanism. For example, on Red Hat distributions, /etc/rc.d/init.d/smartd start and /etc/rc.d/init.d/smartd stop can be used to start and stop the daemon.

Further information about smartd and its config file can be found in the man page (man smartd), and summaries can be found with the commands smartd -D and smartd -h. For example, the -M test Directive sends a test e-mail warning message to confirm that warning e-mail messages are delivered correctly. Other Directives provide additional flexibility, such as monitoring changes in raw Attribute values.
What should you do if a disk shows signs of problems? What if a disk self-test fails or the disk's SMART health status fails? Start by getting your data off the disk and on to another system as soon as possible. Second, run some extended disk self-tests and see if the problem is repeatable at the same LBA. If so, something probably is wrong with the disk. If the disk has failing SMART health status and is under warranty, the vendor usually will replace it. If the disk is failing its self-tests, many manufacturers provide specialized disk health programs, for example, Maxtor's PowerMax or IBM's Drive Fitness Test. Sometimes these programs actually can repair a disk by remapping bad sectors. Often, they report a special error code that can be used to get a replacement disk.
documented on: 2008-03-29
Newsgroups: comp.os.linux.hardware Date: 2001-07-29 12:14:54 PST
>> is there a soft under linux that allows to low level format a disk ?
> Do you mean like 'way back in the old days where you formatted the
> actual sectors and interleave? My understanding (possibly incorrect)
> is that you can't do that with modern hard drives.
You do a low-level format on a floppy disk with "fdformat" or "superformat". Modern hard disks *can* have their sector headers, interleave factors, cylinder skew, and all that adjusted, but you have to get special software from the drive manufacturer, and the software generally is specific to a particular model of drive. This stuff must be put on floppy for obvious reasons, and either has its own miniature homebrew OS or uses DOS.
If you have a hard disk that's spitting out unrecoverable errors that have to do with bad blocks, you can buy some time by moving all the data you want to save somewhere else, running "badblocks -o /tmp/somewhere" on the partition, then doing "mke2fs -l /tmp/somewhere" on the partition. This is a *temporary* solution; when a disk starts dying, it just gets worse and worse.
Matt G
> yes, i meant old fashioned bios format
> i heard that it's the only way to get rid of bad clusters
> and i've got an old disk (seagate 1.2GB) with a lot of them
> so i'm looking for something to erase all and low level format it, but
> nothing works correctly !!!! under windows i mean .... ;-)
Modern drives remap bad blocks all by themselves. If you (or the O/S) see bad blocks, then it's time to replace the drive. And if the drive is old enough that it doesn't remap bad blocks, it should be replaced.
But if you want to run diagnostics, check the vendor's web site. They probably have tools that you can download and run from a DOS floppy.
Howard Christeller
Newsgroups: comp.os.linux, comp.lang.asm.x86 Date: 2002-09-02 10:55:11 PST
> I looked for information about hard disk's low level format on various
> groups (eg. comp.lang.asm.x86, comp.os.linux), but my search only took
> me to posts like: "don't do it", "it's hardware vendor specific", "the
> new ide doesn't make a real low level format".
> Now, I will make a few questions that involves low level programming
> and I like to see professional answers, not the simple answers like
> the above mentioned.
> Then, my questions are:
> - How is implemented the Int13, function 5 actually for hard disks?
This is not fully done on new systems because the IDE interface does not use this information anymore.
The modern drives of the IDE/ATA variety actually use the LBA absolute sector number for everything, no matter how you address the drive.
To do the format like you are talking about you must first tell the drive to use your CHS cylinder, head, sector values.
Then you just go ahead and find the appropriate commands in the IDE/ATA low-level command list and send them.
> - How is implemented the bad sector marking on a hard disk? (not the
> FFF7h cluster mark in the FAT), but the real table management that
> knows which physical sector is really bad, something like the "track
> address fields" for a floppy disk format.
These are generically known as P&G tables. Permanent and gathered errors table or list.
This idea comes from SCSI, which is where much of the modern improvements in IDE have come from. Approximately 3% of the drive, on a physical track by physical track basis, is set aside to catch gathered errors as they occur.
In order to fully understand this, look at a modern drive's size.
162000 * 16 * 63 done on an actual 2 platter or 4 Head system. (~80-Gig)
This means that you actually deal with the following.
648000 * 4 *63 is the first step.
Then you factor in the 6000 TPI (tracks per inch) density. The tracks are the same as cylinders (concentric rings going outward). You actually have about an inch of track width on a 3 1/2" drive, so you actually get something like:
6000 * 4 * 6804 is where you end up.
Now, if it were me, I would just use 6144 (1024 * 6) as my SPT (sectors per track), or about 3 MB per physical track on the media.
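The geometry arithmetic above is easy to sanity-check in the shell (assuming the usual 512-byte ATA sectors):

```shell
# Total sectors implied by the quoted 162000 * 16 * 63 geometry,
# and the rough capacity that works out to with 512-byte sectors
echo $(( 162000 * 16 * 63 ))                    # total sectors
echo $(( 162000 * 16 * 63 / 2 / 1024 / 1024 ))  # size in GiB
```

That gives 163296000 sectors, about 77 GiB (roughly 84 GB in decimal units), consistent with the "~80-Gig" figure; note that 648000 * 4 * 63 is the same product.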
This illustrates the departure of IDE/ATA technology from its FM / MFM / RLL / ESDI roots. The sector address no longer represented the physical address of the sector. It now represented a virtual position and could be changed at will by the onboard electronics for data preservation. All commands to access the P&G tables are vendor specific.
> - How does the BIOS low level format work and which are the supported
> disks (it means, vendor, capacity, etc).
Please see the above then add this to it.
Modern drives use a two-layer approach to a format. The sector boundaries are no longer variable.

Now there is an inner layer of hard-coded sectors that the onboard electronics can only read, not write. Then there is the soft-coded part that contains sector-specific data. This can be CHS placement information in addition to LBA address & sector data.
> - ...
> I have other doubts, but first of all I want to know if there is
> anybody that really can help me.
If you really want to get into the nitty gritty then I suggest you grab the BT168.zip off of my site and study it closely.
Messy though the code is, and poorly documented it may be. However, it runs spinrite. It also handles up to 128-Gig drives: in CHS up to 8 Gig, and in LBA from there up.

Done myself from the ground up. When I get time I will be cleaning up the code and making some real documentation for it.
Doors - Dont look at the future in a window. just walk to the door http://walk.to/doors - Open it and go there.
Newsgroups: comp.os.linux.announce Date: 1993-05-31 14:19:45 PST
here is a patch for the mtools to use fdformat for low-level formatting disks with the mformat command, and to create a special boot selector as the boot sector on the formatted disk. There are some new options for mformat. Please read the man page for mformat.
There are 3 files in this mail:
1.) mboot.h included by mformat.c
2.) modified fdformat.c
3.) the context diff from mtools-2.05 (merged)
vy73 Birko
http://www.tuttogratis.it/software_gratis/benchmark_hard_disk.html http://translate.google.com/translate?hl=en&sl=it&u=http://www.tuttogratis.it/software_gratis/benchmark_hard_disk.html
Console utility to test the performance of various removable units (hard disk, CD, Flash, etc.). This version includes two types of test: multisector read and random sector read. Freeware for Win2k/XP (34 Kb)
Disk Bench measures the speed of your hard disk in a real situation: copying files from one place to another and then deleting them. You thus get a test of a real situation, not a simulation. The software requires Microsoft's .NET Framework to be installed. Freeware for Win9x/Me/NT/2k/XP (1,3 Mb)
This program tests the speed of your PC's hard disk, reporting data on the sequential read transfer rates, the random read transfer rates, and also the access times of the discs. Freeware for Win9x/Me/NT/2k/XP (40 Kb)
Diagnostic tool for Fujitsu hard disks that can analyze S.M.A.R.T. data, or can scan the entire disk surface sector by sector to verify its integrity. Freeware for Windows (160 Kb)
This program creates a bootable floppy disk to launch the diagnostic tool without any programs resident in memory. Drive Fitness Test is a fast and reliable way to test your SCSI and IDE disks. Freeware for Windows (2.1 Mb)
Linux version of Hitachi Drive Fitness Test v3.40. Freeware (1.4 Mb)
http://www.tuttogratis.it/go.tg?54231 Series of utilities for Maxtor hard disks. The package includes a Quick Scan for fast diagnostics, a more thorough scan, low-level formatting, and more. Freeware for Windows (2.3 Mb)
OS: Win9x/NT/2000/Me Download Homepage Rating: 4/5 - 496K Shareware $24.95
Active SMART is hard disk drive monitoring and failure prediction software. It uses S.M.A.R.T. technology to monitor the health status of hard disk drives, helping prevent data loss by predicting possible drive failure. If a fault is detected, you are notified through various local alerting options, or you can enable remote notification via e-mail or other network mail applications, including the drive ID and the time of the first fault. Active SMART monitors all important HDD parameters. You can track every S.M.A.R.T. parameter of your HDD and get information about each drive attribute: its value, its threshold level, and its worst recorded value. It also shows information about the HDD itself: serial number, logical drive information, current mode, and more.
You can also download Active SMART SCSI edition with SCSI drives support. Active SMART SCSI edition supports hard drives on SCSI controllers and SCSI RAID arrays. Active SMART SCSI edition supports up to 4 IDE/ATA and up to 8 SCSI drives.
OS: Win2k/XP Download Rating:4/5 - 34K Freeware
CHDDSPEED is a console utility, which tests the performance of various block devices (HDD, CD, Flash, etc.). This version (0.1) includes two tests: multisector read test and random read test.
OS: Win9x/Me/NT/2k/XP Download Homepage Rating: 4/5 - 1.3Mb Freeware
Disk Bench tests your hard drive speed in a real-life situation, not in a benchmarking environment. All it does is copy a file from A to B, time how long it took, and delete the file from B again. In theory it can test more than just hard drives. You will need to have the .NET Framework installed; you can get this from Windows Update or from MSDN Downloads.
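For readers without Windows, the same measurement can be sketched in a few lines of shell. The file names and the 16 MB size below are arbitrary placeholders, not anything Disk Bench itself uses:

```shell
#!/bin/sh
# Minimal sketch of what Disk Bench does: copy a file from A to B,
# time it, and delete the copy afterwards.
SRC=/tmp/db_src.bin
DST=/tmp/db_dst.bin
# Create a 16 MB test file (size is arbitrary for this sketch).
dd if=/dev/zero of="$SRC" bs=1M count=16 2>/dev/null
start=$(date +%s)
cp "$SRC" "$DST"
sync                       # flush so the write actually reaches the disk
end=$(date +%s)
echo "copy took $((end - start)) second(s)"
rm -f "$SRC" "$DST"        # clean up, as Disk Bench does
```

For a meaningful disk number the file would need to be much larger than RAM; otherwise the copy is mostly a cache exercise.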
OS: Win9x/Me/NT/2k/XP Download Homepage Rating: 4/5 - 40K Freeware
This program tests the speed of drives attached to your computer. Disk Speed reports the linear read transfer rate, the random read transfer rate and also the access time of the drive.
OS: Win9x/Me/NT/2k/XP Download Homepage Rating: 4/5 - 64K
DiskSpeed32 is a hard drive speed analyzer. It sequentially reads the sectors of a hard drive and plots reading speed against cylinder number. It can test both formatted and unformatted hard drives, check hard drive performance, and find remapped tracks.
OS: Windows NT Download Homepage Rating: 5/5 - 567K
This program will measure the performance of your hard drives, running under Windows NT.
OS: WinNT/2k/XP Download Homepage Rating: 4/5 - 13Kb Freeware
This program measures both sustained and burst data transfer rates of your hard disks, cd/dvd-roms and floppy. It features a realtime graphical display.
OS: Win95/98 Download Rating: 5/5 - 3Mb http://www.benchmarkhq.ru/fclick/fclick.php?fid=30 HDD UTILity is a suite that consists of five utilities: HDD Alert UTILity, HDD Benchmark UTILity, HDD Control UTILity, HDD Info UTILity, and HDD Test&Repair UTILity! It seems to be one of the best in its class!
OS: DOS Download Rating: 5/5 - 437K http://www.benchmarkhq.ru/fclick/fclick.php?fid=31 This is a realistic HDD benchmark! It also shows you all the information about your hard drives and supports SCSI, EIDE, U/DMA, S.M.A.R.T., etc. A very good Russian program, one of the best!
OS: WinNT/95/98 Download Homepage Rating: 4/5 - 956Kb
HD Tach is a physical performance hard drive test for Windows 95/98 and Windows NT. In Windows 95/98 it uses a special kernel-mode VxD to get maximum accuracy by bypassing the file system. HD Tach tests the drive's random access time, sequential read speed, CPU utilization, etc.
OS: Win9x/Me/NT/2k/XP Download Homepage Rating: 4/5 - 220Kb Freeware
IDEdrive interrogates your IDE-drives and displays detailed information about them (geometry, features, transfer modes, etc.).
OS: WinNT/2k/XP Download Homepage Rating: 4/5 - 1.3Mb Freeware
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was developed by the Intel Corporation; Intel has since discontinued work on Iometer and handed it over to the Open Source Development Lab (OSDL). Iometer is also available for Linux and Solaris. http://sourceforge.net/project/showfiles.php?group_id=40179&release_id=159909
OS: Win9x/Me/NT/2k/XP Download Homepage Rating: 4/5 - 1Mb Freeware
IOzone is a filesystem benchmark tool. The benchmark generates and measures a variety of file operations. Iozone has been ported to many machines and runs under many operating systems. The benchmark tests file I/O performance for the following operations: read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random read, pread, mmap, aio_read, aio_write.
OS: DOS, Win9x/Me/NT/2k/XP Download Homepage Rating: 5/5 - 107Kb Freeware http://mhddsoftware.com MHDD is a utility for fast and informative low-level drive diagnostics. It performs fast (up to 20 GB in 6 minutes) and accurate diagnostics of the entire surface and all heads of the HDD, irrespective of the data located on it. The program can also remove "software" bad sectors at the highest possible speed. MHDD's major features also include IDE register monitoring, SMART monitoring, HDD stress testing, and more!
OS: DOS Download Rating: 3/5 - 40K This program measures the performance of your hard drive.
OS: WinNT4/2000/XP Download Homepage Rating: 4/5 - 375Kb Freeware
Quick Bench is a small utility to check that your disks are configured properly by benching them and getting the average sequential read speed as well as the CPU load. This lets you know that the DMA (Direct Memory Access) is working properly for the disk.
OS: DOS Download Homepage Rating: 4/5 - 96Kb Freeware
SCSITOOL is a diagnostics and benchmarking tool for SCSI (storage) devices. Currently supported device types are: hard disk, tape, CD-ROM, optical, and removable. It has a load of tests, can low-level format any number of drives simultaneously, can copy one source drive to more than one target drive, and more.
OS: DOS Download Homepage Rating: 4/5 - 25Kb Freeware
This little program will read the S.M.A.R.T. information of your hard drive(s). It supports external UDMA controllers and will predict possible drive failure.
OS: Win95/98 Download Rating: 4/5 - 480Kb
Adaptec's ThreadMark benchmark measures multithreaded I/O performance on computers running Microsoft Windows NT or Windows 95/98. You can use ThreadMark to compare the performance of different disk drives and host adapters, in order to find the optimal I/O solution for your Windows desktop environment. Because ThreadMark uses the same Win32 API calls that other applications use, it makes unbiased performance measurements on any SCSI or EIDE disk drive or host adapter.
OS: DOS Download Rating: 4/5 - 23Kb Freeware
xHDDSpeed tests the performance of a hard drive. It performs such tests as average seek time, access time, linear speed, etc. The program also gives some information about the drive.
Modern desktop computers are so fast in all respects that we often have the luxury of ignoring performance issues. (People like me who started with a 1MHz processor have no trouble remembering how we lusted after every extra slice of CPU speed.) But if you're in a position where you need or want to maximize your computer's speed, all but the most compute-bound tasks will likely benefit from a faster disk drive system. Which raises the question: Aside from reading specs (which are really just a first cousin to statistics in terms of veracity), how do you evaluate the performance of a disk drive?
My favorite tool in this area is iozone, a relatively small but full-featured open source program for benchmarking disk systems. iozone was written by William Norcott and Don Capps, with contributions credited in the source code from several other people, and it can be built for over two dozen OS's and versions, including Linux, Windows (32-bit), several BSD's, and Solaris.
by André D. Balsa v0.4, 26 November 1997
This is the second article in a series of 4 articles on GNU/Linux Benchmarking, to be published by the Linux Gazette. The first article presented some basic benchmarking concepts and analyzed the Whetstone benchmark in more detail. The present article deals with practical issues in GNU/Linux benchmarking: what benchmarks already exist, where to find them, what they effectively measure and how to run them. And if you are not happy with the available benchmarks, some guidelines to write your own. Also, an application benchmark (Linux kernel 2.0.0 compilation) is analyzed in detail.
The DOs and DON'Ts of GNU/Linux benchmarking
A roundup of benchmarks for Linux
Devising or writing a new Linux benchmark
An application benchmark: Linux 2.0.0 kernel compilation with gcc
4.1 General benchmark features
4.2 Benchmarking procedure
4.3 Examining the results
GNU/Linux is a great OS in terms of performance, and we can hope it will only get better over time. But that is a very vague statement: we need figures to prove it. What information can benchmarks effectively provide us with? What aspects of microcomputer performance can we measure under GNU/Linux?
Kurt Fitzner reminded me of an old saying: "When performance is measured, performance increases."
Let's list some general benchmarking rules (not necessarily in order of decreasing priority) that should be followed to obtain accurate and meaningful benchmarking data, resulting in real GNU/Linux performance gains:
Use GPLed source code for the benchmarks, preferably easily available on the Net.
Use standard tools. Avoid benchmarking tools that have been optimized for a specific system/equipment/architecture.
Use Linux/Unix/Posix benchmarks. Mac, DOS and Windows benchmarks will not help much.
Don't quote your results to three decimal figures. A resolution of 0.1% is more than adequate. Precision of 1% is more than enough.
Report your results in standard format/metric/units/report forms.
Completely describe the configuration being tested.
Don't include irrelevant data.
If variance in results is significant, report it alongside the results and try to explain why this is so.
Comparative benchmarking is more informative. When doing comparative benchmarking, modify a single test variable at a time. Report results for each combination.
Decide beforehand what characteristic of a system you want to benchmark. Use the right tools to measure this characteristic.
Check your results. Repeat each benchmark once or twice before publicly reporting your results.
Don't set out to benchmark trying to prove that equipment A is better than equipment B; you may be in for a surprise…
Avoid benchmarking one-of-a-kind or proprietary equipment. This may be very interesting for experimental purposes, but the information resulting from such benchmarks is absolutely useless to other Linux users.
Share any meaningful information you may have come up with. If there is a lesson to be learned from the Linux style of development, it's that sharing information is paramount.
These are some benchmarks I have collected over the Net. A few are Linux-specific, others are portable across a wide range of Unix-compatible systems, and some are even more generic.
UnixBench. A fundamental high-level Linux benchmark suite, Unixbench integrates CPU and file I/O tests, as well as system behaviour under various user loads. Originally written by staff members at BYTE magazine, it has been heavily modified by David C. Niemi.
BYTEmark as modified by Uwe Mayer. A CPU benchmark suite, reporting CPU/cache/memory , integer and floating-point performance. Again, this test originated at BYTE magazine. Uwe did the port to Linux, and recently improved the reporting part of the test.
Xengine by Kazuhiko Shutoh. This is a cute little X window tool/toy that basically reports on the speed with which a system will redraw a coloured bitmap on screen (a simulation of a four cycle engine). I like it because it is unpretentious while at the same time providing a useful measure of X server performance. It will also run at any resolution and pixel depth.
Whetstone. A floating point benchmark by Harold Curnow.
Xbench by Claus Gittinger. Xbench generates the famous xstone rating for Xserver performance comparisons.
XMark93. Like xbench, this is a script that uses X11's x11perf and computes an index (in Xmarks). It was written a few years later than xbench and IMHO provides a better metric for X server performance.
Webstone 2.01. An excellent tool for Web server performance testing. Although Webstone is copyright by Silicon Graphics, its license allows free copying and examination of the source code.
Stream by John D. McCalpin. This program is based on the concept of "machine balance" (sustainable memory bandwidth vs. FPU performance). This has been found to be a central bottleneck for computer architectures in scientific applications.
Cachebench by Philip J. Mucci. By plotting memory access bandwidth vs. data size, this program will provide a wealth of benchmarking data on the memory subsystem (L1, L2 and main memory).
Bonnie by Tim Bray. A high-level synthetic benchmark, bonnie is useful for file I/O throughput benchmarking.
Iozone by Bill Norcott. Measures sequential file i/o throughput. The new 2.01 version supports raw devices and CD-ROM drives.
Netperf is copyright Hewlett-Packard. This is a sophisticated tool for network performance analysis. Compared to ttcp and ping, it verges on overkill. Source code is freely available.
Ttcp. A "classic" tool for network performance measurements, ttcp will measure the point-to-point bandwidth over a network connection.
Ping. Another ubiquitous tool for network performance measurements, ping will measure the latency of a network connection.
Perlbench by David Niemi. A small, portable benchmark written entirely in Perl.
Hdparm by Mark Lord. Hdparm's -t and -T options can be used to measure disk-to-memory (disk reads) transfer rates. Hdparm allows setting various EIDE disk parameters and is very useful for EIDE driver tuning. Some commands can also be used with SCSI disks.
Dga with b option. This is a small demo program for XFree's DGA extension, and I would never have looked at it were it not for Koen Gadeyne, who added the b command to dga. This command runs a small test of CPU/video memory bandwidth.
MDBNCH. This is a large ANSI-standard FORTRAN 77 program used as an application benchmark, written by Furio Ercolessi. It accesses a large data set in a very irregular pattern, generating misses in both the L1 and L2 caches.
Doom :-) Doom has a demo mode activated by running doom -timedemo demo3. Anton Ertl has set up a Web page listing results for various architectures/OS's.
All the benchmarks listed above are available by ftp or http from the Linux Benchmarking Project server in the download directory: www.tux.org/pub/bench or from the Links page.
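As a quick home-made counterpart to the hdparm -t style read test mentioned above, one can time a sequential read with dd. This is only a sketch: it reads a regular file rather than the raw device, so without dropping the page cache it may report cache speed rather than disk speed.

```shell
#!/bin/sh
# Rough approximation of an hdparm -t style sequential-read test,
# done on a regular file instead of a raw device (no root needed).
# Caveat: a freshly written file is likely still in the page cache,
# so this measures the cache unless the file is far larger than RAM.
F=/tmp/seqread.bin
dd if=/dev/zero of="$F" bs=1M count=32 2>/dev/null
sync
# dd prints bytes copied, elapsed time, and transfer rate on stderr;
# the summary is the last line of its output.
RATE=$(dd if="$F" of=/dev/null bs=1M 2>&1 | tail -1)
echo "$RATE"
rm -f "$F"
```

hdparm -t itself bypasses these caching issues by reading the raw device with the buffer cache flushed, which is why it remains the preferred tool when you have root access.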
We have seen last month that (nearly) all benchmarks are based on either of two simple algorithms, or combinations/variations of these:
Measuring the number of iterations of a given task executed over a fixed, predetermined time interval.
Measuring the time needed for the execution of a fixed, predetermined number of iterations of a given task.
We also saw that the Whetstone benchmark would use a combination of these two procedures to "calibrate" itself for optimum resolution, effectively providing a workaround for the low resolution timer available on PC type machines.
Note that some newer benchmarks use new, exotic algorithms to estimate system performance, e.g. the Hint benchmark. I'll get back to Hint in a future article.
Right now, let's see what algorithm 2 would look like:
    initialize loop_count
    start_time = time()
    repeat
        benchmark_kernel()
        decrement loop_count
    until loop_count = 0
    duration = time() - start_time
    report_results()
Here, time() is a system library call which returns, for example, the elapsed wall-clock time since the last system boot. Benchmark_kernel() is obviously exercising the system feature or characteristic we are trying to measure.
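A minimal shell rendering of algorithm 2, with a stand-in dd loop as the benchmark kernel (substitute whatever operation you actually want to measure):

```shell
#!/bin/sh
# Algorithm 2: fixed number of iterations, measure elapsed wall-clock time.
# benchmark_kernel is a placeholder workload for this sketch.
benchmark_kernel() {
    dd if=/dev/zero of=/dev/null bs=64k count=16 2>/dev/null
}
loop_count=100
start_time=$(date +%s)
i=$loop_count
while [ "$i" -gt 0 ]; do
    benchmark_kernel
    i=$((i - 1))
done
duration=$(( $(date +%s) - start_time ))
echo "$loop_count iterations in $duration second(s)"
```

Note that date +%s has a resolution of one second, which illustrates the calibration problem discussed next: the loop count must be large enough that the total duration dwarfs the clock resolution.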
Even this trivial benchmarking algorithm makes some basic assumptions about the system being tested and will report totally erroneous results if some precautions are not taken:
If the benchmark kernel executes so quickly that the looping instructions take a significant percentage of total loop processor clock cycles to execute, results will be skewed. Preferably, benchmark_kernel() should have a duration of > 100 x duration of looping instructions.
Depending on system hardware, one will have to adjust loop_count so that total duration > 100 x clock resolution (for 1% benchmark precision) or 1000 x clock resolution (for 0.1% benchmark precision). On PC hardware, clock resolution is 10 ms.
We mentioned above that we used a straightforward wall-clock time() function. If the system load is high and our benchmark gets only 3% of the CPU time, we will get completely erroneous results! And of course, on a multi-user, pre-emptive, multi-tasking OS like GNU/Linux, it's impossible to guarantee exclusive use of the CPU by our benchmark.
You can substitute the benchmark "kernel" with whatever computing task interests you more or comes closer to your specific benchmarking needs.
Examples of such kernels would be:
For FPU performance measurements: a sampling of FPU operations.
Various calculations using matrices and/or vectors.
Any test accessing a peripheral i.e. disk or serial i/o.
For good examples of actual C source code, see the UnixBench and Whetstone benchmark sources.
The more one gets to use and know GNU/Linux, the more often one compiles the Linux kernel. Very quickly it becomes a habit: as soon as a new kernel version comes out, we download the tar.gz source file and recompile it a few times, fine-tuning the new features.
This is the main reason for proposing kernel compilation as an application benchmark: it is a very common task for all GNU/Linux users. Note that the application that is being directly tested is not the Linux kernel itself, it's gcc. I guess most GNU/Linux users use gcc everyday.
The Linux kernel is being used here as a (large) standard data set. Since this is a large program (gcc) with a wide variety of instructions, processing a large data set (the Linux kernel) with a wide variety of data structures, we assume it will exercise a good subset of OS functions like file I/O, swapping, etc and a good subset of the hardware too: CPU, memory, caches, hard disk, hard disk controller/driver combination, PCI or ISA I/O bus. Obviously this is not a test for X server performance, even if you launch the compilation from an xterm window! And the FPU is not exercised either (but we already tested our FPU with Whetstone, didn't we?). Now, I have noticed that test results are almost independent of hard disk performance, at least on the various systems I had available. The real bottleneck for this test is CPU/cache performance.
Why specify the Linux kernel version 2.0.0 as our standard data set? Because it is widely available (most GNU/Linux users have an old CD-ROM distribution with the Linux kernel 2.0.0 source), and also because it is quite near in terms of size and structure to present-day kernels. So it's not exactly an out-of-anybody's-hat data set: it's a typical real-world data set.
Why not let users compile any Linux 2.x kernel and report results? Because then we wouldn't be able to compare results anymore. Aha you say, but what about the different gcc and libc versions in the various systems being tested? Answer: they are part of your GNU/Linux system and so also get their performance measured by this benchmark, and this is exactly the behaviour we want from an application benchmark. Of course, gcc and libc versions must be reported, just like CPU type, hard disk, total RAM, etc (see the Linux Benchmarking Toolkit Report Form).
Basically what goes on during a gcc kernel compilation (make zImage) is that:
Gcc is loaded in memory,
Gcc gets fed sequentially the various Linux kernel pieces that make up the kernel, and finally
The linker is called to create the zImage file (a compressed image file of the Linux kernel).
Step 2 is where most of the time is spent.
This test is quite stable between different runs. It is also relatively insensitive to small loads (e.g. it can be run in an xterm window) and completes in less than 15 minutes on most recent machines.
Do I really have to tell you where to get the kernel 2.0.0 source? OK, then: ftp://sunsite.unc.edu/pub/Linux/kernel/source/2.0.x or any of its mirrors, or any recent GNU/Linux CD-ROM set with a copy of sunsite.unc.edu. Download the 2.0.0 kernel, gunzip and untar under a test directory (tar zxvf linux-2.0.tar.gz will do the trick).
Cd to the linux directory you just created and type make config. Press <Enter> to answer all questions with their default value. Now type make dep ; make clean ; sync ; time make zImage. Depending on your machine, you can go and have lunch or just an espresso. You can't (yet) blink and be done with it, even on a 600 MHz Alpha. By the way, if you are going to run this test on an Alpha, you will have to cross-compile the kernel targeting the i386 architecture so that your results are comparable to the more ubiquitous x86 machines.
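The same procedure can be wrapped in a small script so the elapsed time is captured automatically. BUILD_CMD below is a stand-in so the sketch runs anywhere; for the real benchmark it would be the make dep ; make clean ; sync ; make zImage sequence, run in the linux-2.0.0 source tree:

```shell
#!/bin/sh
# Wrap the compile benchmark so that elapsed wall-clock time is captured.
# BUILD_CMD is a placeholder command standing in for the real
# "make dep ; make clean ; sync ; make zImage" sequence.
BUILD_CMD="sleep 1"
start=$(date +%s)
sh -c "$BUILD_CMD"
elapsed=$(( $(date +%s) - start ))
# Whole seconds are enough resolution for a multi-minute compile.
echo "elapsed: ${elapsed}s"
```

Using the shell built-in time (or /usr/bin/time) as in the text additionally reports user and system CPU time, which is how the sample output below was produced.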
This is what I get on my test GNU/Linux box:
186.90user 19.30system 3:40.75elapsed 93%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (147838major+170260minor)pagefaults 0swaps
The most important figure here is the total elapsed time: 3 min 41 s (there is no need to report fractions of seconds).
If you were to complain that the above benchmark is useless without a description of the machine being tested, you'd be 100% correct! So, here is the LBT Report Form for this machine:
LINUX BENCHMARKING TOOLKIT REPORT FORM
CPU
> > Will this not influence the performance a lot since the head of
> > the disk has to walk all over the disk? Thus making comparisons
> > practically useless since you never know the state of fragmentation?
>
> I think you've got it exactly right here. Whenever I do a benchmark on a
> disk, I follow the following basic plan:
>
> 1) Use a freshly formatted disk
> 2) Disable all but 32M RAM
> 3) Switch to single-user mode
> 4) Measure performance at start (maximum) and end (minimum)
> 5) On large disks (>20G or so), try the first 1G and the last 1G by using
>    fdisk to create partitions there
> 6) Use tiotest, NOT bonnie! Try multiple threads (I use 1, 2, 4, 8, 16,
>    32, 64, 128, 256 threads - this is perhaps excessive!)
What size datasets are you using? Bonnie++ is still a good benchmark, although it stresses things differently. The maximum number of threads that you should need to (or probably even want to) run is between 2x and 3x the number of disks that you have installed. That should ensure that every drive is pulling 1 piece of data, and that there is another thread that is waiting for data while that one is being retrieved.
Gregory Leblanc
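The thread sweep in the quoted plan can be scripted. Since tiotest's exact flags vary by version, this sketch only prints the command lines (a dry run); the -t flag (thread count), the -d flag (target directory), and the /mnt/test mount point are assumptions to adapt to your setup:

```shell
#!/bin/sh
# Dry run: generate one tiotest invocation per thread count.
# Replace "echo" with the real invocation on a scratch disk.
CMDS=$(for threads in 1 2 4 8 16 32; do
    echo "tiotest -t $threads -d /mnt/test"
done)
echo "$CMDS"
```

Stopping at 2-3x the number of installed disks, as suggested above, keeps the sweep short without losing the interesting part of the curve.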
http://wauug.erols.com/~balsa/linux/benchmarking/HOWTO/Benchmarking-HOWTO-0.15-3.html http://www.linux.org/docs/ldp/howto/Benchmarking-HOWTO-3.html
This is a proposal for a basic benchmarking toolkit for Linux, to be expanded and improved. Take it for what it's worth, i.e. as work-in-progress.
Newsgroups: comp.os.linux.development.system Date: 1997/07/09
I wrote a benchmark to test Linux disk speed and compare the results with other OSes, especially NT and QNX.
You can retrieve the source at:
General aim
I want to test disk speed with maximum consistency: to build video/multimedia applications while limiting the I/O board cache size, the system has to be as stable as possible. This benchmark first determines the disk's maximum speed and then checks how large the fluctuations can be while an application is reading the disk.
I have currently tested:
PPro with IDE
PPro with Adaptec Ultra Wide
bi-PPro with DPT RAID0 (2-disk array and 32 MB cache)
The benchmark provide 3 tests:
read
write
read and write (copy)
Avaliable options are:
block size
synchronization rate (gives system constancy)
file size
First result
IDE on the PPro: a little more than 3 MB/s
SCSI Adaptec on the PPro: less than 4 MB/s
bi-PPro & DPT RAID0: a little more than 7 MB/s
HOW to
First, recompile the benchmark (this should be easy, at least on Linux). Check raw performance (with no synchronization). Then check constancy with synchronization (usually 16 or 33 ms), increasing the rate in increments of 1 MB/s up to your system's limit. Try to find your best block size; on my system, 64 KB was usually a good value.
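The block-size search in the steps above can be sketched with dd, which prints the transfer rate itself. /dev/zero is used here so the sketch runs anywhere; for an actual measurement you would point IF at your real file or device:

```shell
#!/bin/sh
# Block-size sweep: read with several block sizes and let dd report
# the rate. IF=/dev/zero is a placeholder so the sketch runs anywhere.
IF=/dev/zero
OUT=$(for bs in 4k 16k 64k 256k; do
    # dd's last stderr line is the bytes/elapsed/rate summary.
    line=$(dd if="$IF" of=/dev/null bs="$bs" count=256 2>&1 | tail -1)
    echo "bs=$bs: $line"
done)
echo "$OUT"
```

On a real device, keep the total amount read constant across block sizes (adjust count as bs grows) so the runs are comparable.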
Final result
If you send me your results by mail (phillf@iu-vannes.fr), I will post a global summary of the results to the newsgroup at the end of August.
Warning
Take a file that is big enough: on our university bi-Pro with 256 MB RAM, we noticed that Linux was able to read 150 MB in less than 1 second (impossible unless everything is in the cache).
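The cache effect warned about here is easy to demonstrate: read the same file twice and compare the rates dd reports. The second read is normally served from the page cache (and on a freshly written file even the first read may be), which is exactly why the test file must be much larger than RAM:

```shell
#!/bin/sh
# Demonstrate the page-cache effect: two reads of the same small file.
F=/tmp/cache_demo.bin
dd if=/dev/zero of="$F" bs=1M count=32 2>/dev/null
sync
FIRST=$(dd if="$F" of=/dev/null bs=1M 2>&1 | tail -1)
SECOND=$(dd if="$F" of=/dev/null bs=1M 2>&1 | tail -1)
echo "first read : $FIRST"
echo "second read: $SECOND"
rm -f "$F"
```

A 32 MB file on a machine with plenty of RAM will show both reads running at memory speed; only a file well beyond RAM size forces the disk itself into the picture.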
POSBB is a new benchmark program with which you will be able to test and compare different computers, operating systems, CPUs, and even C compilers. It will be able to perform many tests, and the user will select which tests to run. Later, graphical frontends for many platforms will also arrive.
POSBB means Portable OS-Based Benchmark.
Portable, because there will be ports for many platforms, and I'll send the sources to everyone who wants to help me.
OS-Based, because it will use functions of the operating systems, or ANSI C functions, but NEVER custom routines. I do this because I want to test the performance that real programs would have on a given computer and operating system, not the raw power of the hardware.
Benchmark doesn't need any other word.
It does or will do these tests:
Memory copying speed (Done)
Int math speed (Done)
FP math speed (Done)
Sorting speed
Graphic speed (pixels, lines, polygons, etc.) (Done)
Text output speed, both graphical and to stdout (Done, only to stdout at the moment)
Disk speed (reading and writing to files, not directly to the disk) (Done)
GUI drawing speed (drawing of a very complex user interface,probably MUI on the Amiga)
Multitasking performances (using two or more threads)
JPEG / MPEG encoding-decoding speed
Data compression-decompression speed, with various algorithms
Ray tracing speed using POV-RAY freeware raytracer
…
"Done" means the test already runs. Any other suggestions are welcome.
At this time there are two versions of POSBB: Amiga and "generic". The first does all the tests I've finished; the second lacks the graphical ones and will never have them. In fact, the "generic" version is compatible with every computer on Earth that has a C compiler. You can compile it on any platform without changing anything.
Click here http://www.pragmanet.it/hppersonali/user827/myprgs/posbb.lha to download the latest archive available via HTTP (Amiga v0.23, generic v0.12, about 146 kb). It will be available on Aminet soon (check util/moni/Posbb.lha). In the archive you will find the Amiga (m68k and PPC, optimized for all 680x0s), Linux (i386) and MS Windows (32-bit) executables, sources, docs, and results of tests on some machines.
Last Update 02 May 98
documented on: 2007.01.12