Archive-name: arch-storage/part2
Version: $Header: /nfs/yelo/rdv/comp-arch-storage/faq/RCS/FAQ-2.draft,v 1.37 98/01/16 18:20:09 rdv Exp $
Posting-Frequency: monthly
Rod Van Meter, Joe Stith, and the gang on comp.arch.storage
Information on disk, tape, MO, RAID and SSD can be found in part 1 of
the FAQ. Part 2 covers file systems, hierarchical storage management,
backup software, robotics, benchmarking, MTBF and miscellaneous
topics.
1. Standards
1.1. ANSI X3B5 {None}
1.2. IEEE Mass Storage System Ref Model (OSSI) {Brief, 6/1/95}
1.3. ECMA - European Computer Manufacturers Association {None}
1.4. System Independent Data Format (SIDF)
2. I/O Related Email Lists
3. Hierarchical Storage Management
3.1. Unitree {Brief}
3.1.1. Epoch vs Unitree
3.2. National Storage Lab {Brief}
3.3. HIARC {New}
3.4. Epoch (also known as StorageTek's NearNet) {Brief}
3.5. Zetaco/NETstor {Brief}
3.6. R-Squared Infinity IFS 2 {Brief}
3.7. AMASS
3.8. Tracer XFS {None}
3.9. Metior
3.10. NAStore {Brief}
3.11. DMF {Brief}
3.12. FileServ {Brief}
3.13. Cray Research's Open Storage Manager {Brief}
3.14. T-mass {None}
3.15. HP OpenView OmniStorage
3.16. Platinum NetArchive-HSM {Brief, New}
3.17. Large Storage Configurations {Brief,New}
3.18. Unix HSM Vendor List
3.19. Mainframe
3.20. PC & PC Server Oriented Packages
3.20.1. HP Optical Jukebox Storage Solution
3.20.2. Chili Pepper Software
3.20.3. Cheyenne ARCserve
3.21. DATMAN {Brief}
3.22. Windows NT
3.23. Other Non-Unix HSM
3.24. Tapes as Disks {Brief, New}
4. Backup Software
4.1. PC-Oriented Backup Packages
4.2. Unix Packages
4.2.1. Spectra Logic Alexandria
4.2.2. ADSTAR Distributed Storage Manager
4.2.3. NetWorker
4.2.4. BudTool {Brief}
4.2.5. HP OmniBack II {Brief, New}
4.2.6. Workstation Solutions {Brief}
4.2.7. Amanda {Brief, New}
4.2.8. Remote Backup or Mirroring {Brief, New}
5. Tape and Autochanger Management Software
5.1. REELlibrarian
5.2. ANT Medium Changer
5.3. Tapes 3000 {Brief}
5.4. Others
6. Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
6.1. 8mm {Brief}
6.1.1. Exabyte {Brief}
6.1.1.1. EXB-10h
6.1.1.2. EXB-210
6.1.1.3. EXB-220
6.1.1.4. EXB-440/480
6.1.1.5. EXB-10
6.1.1.6. EXB-10i
6.1.1.7. EXB-10e
6.1.1.8. EXB-120
6.1.2. ADIC {Brief, New}
6.1.3. Storage Tek (was Lago) DataWheel {Brief}
6.1.4. ACL {None}
6.1.5. Cambridge On-Line Storage {Brief}
6.1.6. Spectra Logic {Brief}
6.1.7. Qualstar {Brief}
6.2. 3480
6.2.1. StorageTek {Brief}
6.2.2. EMASS (was GRAU) {Brief}
6.2.3. 3590 (Magstar,NTP) {Brief}
6.3. 4mm {Brief}
6.3.1. Cambridge On-Line storage {Brief}
6.3.2. Spectra Logic {Brief}
6.3.3. HP 4mm {Brief}
6.3.4. Storage Tek Datawheel {Brief}
6.3.5. Diverse Logistics Libra {Brief, New}
6.3.6. Qualstar {Brief, New}
6.3.7. ADIC {Brief, New}
6.4. VHS {Brief}
6.4.1. MountainGate (was Metrum)
6.5. Digital Linear Tape (DLT) (Quantum) {Brief}
6.5.1. TZ877 {Brief}
6.5.2. TL820 {Brief}
6.5.3. MountainGate
6.5.4. Breece Hill {Brief}
6.5.5. Odetics {Brief}
6.5.6. MediaLogic ADL
6.5.7. ADIC {Brief, New}
6.6. D-2
6.6.1. Ampex
6.6.2. Odetics
6.7. ID-1
6.7.1. Sony DMS, PetaSite {Brief}
6.8. Optical Disk (MO,WORM) Libraries
6.8.1. Hitachi 448 GB optical library
6.8.2. HP MO Autochangers
6.8.3. Maxoptix MO Autochangers
6.8.4. MountainGate {Brief}
6.8.5. DISC DocuStore {Brief}
6.8.6. Kodak {Brief}
6.8.7. Sony {Brief}
6.9. CD-ROM Jukeboxes
6.9.1. Pioneer
6.9.2. CyberTower {Brief, New}
6.9.3. NSMJukebox {Brief, New}
6.9.4. Nakamichi {Brief, New}
6.9.5. CDI Juke Box Library {Brief,New}
6.9.6. K & S M-200 {Brief, New}
6.9.7. DISC {Brief, New}
6.9.8. Meridian {Brief, New}
7. File Systems
7.1. NFS {Brief}
7.1.1. NFS V3
7.2. AFS {Brief}
7.3. DFS {Brief}
7.4. Log based file systems
7.5. Mainframe File Systems
7.6. Parallel System File Systems
7.7. Microsoft Windows NT {Brief}
7.8. Large Unix File Systems
7.9. Non-Unix Large File Systems
8. (Device) Interfaces
8.1. SCSI {Full}
8.1.1. Single ended vs differential
8.1.2. Asynchronous vs Synchronous Transfers
8.1.3. SCSI-I vs SCSI-II vs SCSI-III
8.1.4. Fast-Wide SCSI
8.1.5. Shared Busses / Performance {Brief}
8.1.6. Cabling/Hot Plugging {Brief}
8.1.7. Third Party Transfers/Separation of Control & Data Paths {Brief}
8.2. IDE {Brief}
8.3. IPI {None}
8.4. HIPPI {Brief}
8.4.1. HIPPI-6400 {Brief}
8.5. Ultranet {Brief}
8.6. Ethernet {Brief}
8.7. FDDI {None}
8.8. Fibre Channel Standard (FCS)
8.9. ESCON/SBCON {Brief}
8.10. IEEE P1394 (Serial Bus)
8.11. Serial Storage Architecture (SSA)
8.12. S2I: IEEE P1285 Scalable Storage Interface
8.13. Multibus, Unibus, Mainframe Channels, and other history {None}
9. Other
9.1. Video vs Datagrade tapes {brief, 5/94}
9.2. Compression
10. Benchmarking
11. Mass Storage Conferences
11.0.1. THIC Tape Head Interface Committee {Brief, New}
12. MTBF (Mean Time Between Flareups, er, Failures)
13. Mass Storage Reports
14. Network-Attached Peripherals {Brief}
15. Other References
15.1. Print
15.2. Web
15.3. Newsgroups
15.4. Research Papers
16. ORIGINAL CALL FOR VOTES
17. Original Author's Disclaimer and Affiliation:
18. Copyright Notice
19. Additional Topics to be added
------------------------------
Subject: [1] Standards
From: Standards
There's a killer supply of computer-related standards at
http://www.cmpcmm.com/cc/. Fibre Channel and several
mass-storage-related items can be found there.
The ANSI and IEEE standards can be purchased in hardcopy form (the
only way some of them are available) from Global Engineering
Documents, (800)854-7179 or (303)792-2181.
Subject: [1.1] ANSI X3B5 {None}
From: Standards
Subject: [1.2] IEEE Mass Storage System Ref Model (OSSI) {Brief, 6/1/95}
From: Standards
The Storage Systems Standards Working Group now has a WWW page at
http://www.arl.mil/IEEE/ssswg.html.
Version 5 of the model is available via
ftp://swedishchef.lerc.nasa.gov/mass_store/ as the files
ossiv5.ps{1,2,3}.
The OSSI (Open Storage Systems Interconnection) Reference Model (its
new name) "provides the framework for a series of standards for
application and user interfaces to open storage systems." One of its
prime purposes is to define a common vocabulary. Claiming compliance
with the model at the moment has little practical value as far as
interoperation of different pieces from different vendors goes (which
is one of the ultimate aims of the still-distant standards that may
develop from this model).
Subject: [1.3] ECMA - European Computer Manufacturers Association {None}
From: Standards
Subject: [1.4] System Independent Data Format (SIDF)
From: Standards
This is a data format for tapes and removable disks, to facilitate
interchange between hardware and software platforms. See the FAQ at
http://www.mcs.net/~jgast/sidf.html.
Subject: [2] I/O Related Email Lists
From: I/O Related Email Lists
Here is a list of email reflectors for those who need to be deeply
involved in the technical details of various interfaces and standards.
X3T10/95-010 r0, April 6, 1995
I/O Interface Related Reflectors (mailing lists)

Reflector       Subscribe/Unsubscribe        Broadcast to                  majordomo/listserv
Name            Address                      Reflector                     keyword
-------------   --------------------------   ---------------------------   ------------------
SCSI            scsi-request@symbios.com     scsi@symbios.com              n/a (human)
ATA             majordomo@dt.wdc.com         ata@dt.wdc.com                ata
ATAPI           majordomo@dt.wdc.com         atapi@dt.wdc.com              atapi
SSA             majordomo@dt.wdc.com         ssa@dt.wdc.com                ssa
IDETAPE         majordomo@dt.wdc.com         idetape@dt.wdc.com            idetape
Disk Attach     majordomo@dt.wdc.com         disk_attach@dt.wdc.com        disk_attach
10bit           majordomo@dt.wdc.com         10bit@dt.wdc.com              10bit
CD-Recordable   majordomo@dt.wdc.com         cdr@dt.wdc.com                cdr
System Issues   majordomo@dt.wdc.com         si@dt.wdc.com                 si
MultiMedia      majordomo@dt.wdc.com         mmc@dt.wdc.com                mmc
IEEE P1394      bob.snively@eng.sun.com      p1394@sun.com                 n/a (human)
SFF             bob.snively@eng.sun.com      sff_reflector@sun.com         n/a (human)
IPI             majordomo@think.com          ipi-ext@think.com             ipi-ext
HIPPI           majordomo@think.com          hippi-ext@think.com           hippi-ext
Fibre Chan.     majordomo@think.com          fibre-channel-ext@think.com   fibre-channel-ext
FC IP Prot.     majordomo@think.com          fc-ip-ext@think.com           fc-ip-ext
PCMCIA          listserv@cirrus.com          pcmcia-gen@cirrus.com         pcmcia-gen
FC Class 4      majordomo@northyork.hp.com   fc-class4@northyork.hp.com    fc-class4
FC Isoch.       majordomo@northyork.hp.com   fc-isoch@northyork.hp.com     fc-isoch
All of the majordomo and listserv reflectors are automatic. To
subscribe or unsubscribe, send a message to the subscribe/unsubscribe
address with a line in the message body (not the subject line) of the
following format:
command reflector_name [your_email_address]
NOTE: At least for the reflectors at majordomo@dt.wdc.com, your email
address is optional. If you include it and it doesn't match the
address in the email headers, there will be a delay while humans
verify your email address.
examples:
subscribe ata
subscribe ssa
subscribe ssa person@company.com
subscribe atapi
subscribe mmc
subscribe fibre-channel-ext person@company.com
subscribe pcmcia-gen person@company.com
unsubscribe ssa person@company.com
help
lists
The other reflectors are managed by humans who are a little less picky
about the request format, but not quite as prompt. Please include
your name, email address, phone, and fax numbers in the message body
for the human-managed reflectors.
(with permission from John Lohmeyer, 95/5/10)
Subject: [3] Hierarchical Storage Management
From: Hierarchical Storage Management
HSM systems transparently migrate files from disk to optical disk
and/or magnetic tape, usually robotically accessible. When a user
later accesses a migrated file, the system transparently stages it
back to disk.
Watch for maximum file size limitations, sometimes limited by the
size of the media used, sometimes by the server's OS, and sometimes
neither.
Some offer integrated backup. Some will manage multiple copies of
files for data reliability.
Some offer integrated migration from other systems (i.e., file
servers and/or workstations) to the central location's disks, then to
the central location's robotics. This generally requires changes to
the on-disk file system format on the migration clients.
An item to watch for is that the file management may be exactly like
Unix -- that is, all files appear to be online, and once they're
deleted, they're gone forever, even though the data may still be on
tape.
All of the subsections here are Unix-compatible (in various flavors)
unless indicated otherwise.
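To make the migrate/stage cycle concrete, here is a minimal sketch of
the water-mark policy most of these systems implement (hypothetical
Python, not any particular vendor's product; the paths, thresholds,
and flat cache directory are all made up for illustration):

    import os, shutil

    CACHE_DIR = "/hsm/cache"     # magnetic disk cache (hypothetical)
    TERTIARY_DIR = "/hsm/tape0"  # stand-in for a tape/optical volume
    HIGH_WATER = 0.90            # start migrating above 90% full
    LOW_WATER = 0.70             # stop once usage drops below 70%

    def cache_usage():
        """Fraction of the cache file system in use."""
        st = os.statvfs(CACHE_DIR)
        return 1.0 - st.f_bavail / st.f_blocks

    def migrate_until_low_water():
        """Copy least-recently-used files to tertiary storage and
        truncate them to zero-length stubs, oldest first."""
        files = [os.path.join(CACHE_DIR, f) for f in os.listdir(CACHE_DIR)]
        files.sort(key=lambda f: os.stat(f).st_atime)   # LRU first
        for path in files:
            if cache_usage() < LOW_WATER:
                break
            shutil.copy2(path, TERTIARY_DIR)   # data now on "tape"
            open(path, "w").close()            # leave a stub behind

    def stage_in(path):
        """Called on access to a stub: bring the data back to disk."""
        if os.path.getsize(path) == 0:         # it's a stub
            shutil.copy2(os.path.join(TERTIARY_DIR,
                                      os.path.basename(path)), path)

    if cache_usage() > HIGH_WATER:
        migrate_until_low_water()

A real HSM does the staging transparently at the VFS/vnode layer and
keeps real metadata rather than zero-length stubs; the loop above only
illustrates the policy.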
Additional Information:
See also _DEC Professional_, February 1993, Page 40 and _Client/Server
Today_, Dec. '94, p. 60.
The System-Managed Storage Guide, by Howard W. Miller, is $225 for
the first copy and $75 for additional copies for the same company,
available from The Information Technology Institute, 136 Orchard
Street, Byfield, Massachusetts, 01922-1605.
(stith@fnal.gov)
Thomas Woodrow did an evaluation of NAStore, FileServ, DMF and Unitree
in 1993. It can be obtained through
http://www.nas.nasa.gov/NAS/TechReports/RNDreports/RND-93-014/RND-93-014.html
or the Proc. 3rd NASA Goddard Conference on Mass Storage Systems and
Technologies, Oct. 1993, pp. 187--216. Somewhat dated now but
excellent methodology for comparing HSMs.
Subject: [3.1] Unitree {Brief}
From: Hierarchical Storage Management
The uncle of UNIX HSMs. Developed primarily at Lawrence Livermore
National Laboratory. Commercialized by a company called DISCOS,
then sold to OpenVision. UniTree was sold to UniTree Software in
December, 1994. See http://www.unitree.com.
Many versions exist on different hardware platforms, including a
National Storage Lab (NSL) UniTree commercialized by IBM - Fed
Systems. It's also available on SGI, Convex, and Amdahl hardware, at
least.
See also "Epoch vs Unitree" below
For Convex, try
Jim Wilson
214-497-3085
jrwilson@convex.com
Business Development
Data Management Applications
Convex Computer Corporation
For most other platforms, call Open Vision at (800)223-OPEN or
(510)426-6400.
New info:
The latest release of UniTree, V1.9.1, has the following changes:
- Available directly from UniTree Software Inc.
- Support DEC, HP-UX, SGI, Sun
- GUI(Tcl/Tk) tools for installation and administration
- New name database structure
- Common Message Logger
- Parallel Migration and Staging
- Multiple Storage Hierarchy (Optical/Tape)
- FTP performance improvements (Read/Write 20 MB/s / 16 MB/s)*
- NFS performance improvements (Read/Write 3.5 MB/s / 2 MB/s)*
- Rule-based dynamic migration
- Support for new robots (e.g., STK 97xx)
- Support >2GB disk partitions on Sun
- 64K File Families
- Configurable media and drive types
- Departmental File Server Configuration
- Compatible with most backup software (Legato, CAM, SMArch)
Demo copy available for download from web site: www.unitree.com
New resellers in Asia, Europe, Australia
* Measured on a dual-CPU Sun Ultra 3000 with 256 MB of memory and 10 disks
--
Francis Kim Phone: (510) 833-3460
Director of Sales and Marketing FAX: (510) 833-9345
Unitree Software, Inc. e-mail: francis@unitree.com
11875 Dublin Blvd. Suite A200E WWW: http://www.unitree.com
Dublin, Ca. 94568
Subject: [3.1.1] Epoch vs Unitree
From: Hierarchical Storage Management
(Note: this evaluation is old, and should be taken with a
grain of salt. rdv, 3/96)
(6/93) We just bought both last year. We bought an Epoch I
with the 20 GB EO and 327 GB worm. We will be upgrading it to an
Epoch II soon. We also bought Unitree from Titan to run on a Silicon
Graphics server and hook up to the STK 3480 silo. We hope to add more
silos eventually.
Unitree is licensed based on storage capacity while Epoch is not.
There may be an exception to this - STK just began reselling Epoch as
the front end for their silos and I'm not sure how they handle
licensing.
My office mate and I (I handle Epoch, he handles Unitree) have enjoyed
comparing the merits/demerits of each over the last year. Comparison
in our case is slightly slanted due to the fact that the Epoch has
optical disk while the Unitree system has 3480 tape - so some
observations have more to do with media advantages/disadvantages.
Unitree
+ Allows large files - can span volumes
+ Allows you to enlarge the cache easily, allows very large
cache
+- Unitree has replaced several UNIX utilities with their own
(FTP, NFS and the file system). This allows certain features to
work but is generally slower and disallows access to the archive when
you are on the server itself - unless you NFS mount!
+ Allows deleted files to be saved for a specified time
+ Allows multiple copies of files to be kept
+ Data is copied to archive soon after creation
+ Unitree runs on several different platforms
- Does not allow access to data until it is completely
reloaded
- Behaves poorly with small files (due to necessary overhead)
- Unitree is licensed to several vendors, so versions differ
- NFS access is so slow that we recommend it not be used for
file transfer - only for ls and du, etc. Use FTP.
- The Silicon Graphics version is still new and has some
problems
Epoch
+ Allows access to the data as soon as part of it is loaded
+ Company seems serious about reputation and support
+ The Epoch II is based on a SUN system, with few
modifications
+ Data is copied to archive only when the cache space is
needed
+ All native UNIX transfer methods work - NFS, FTP and RCP
+ Add on products greatly simplify backup and extend
archiving features to other systems.
- Deleted files are gone forever
- Currently only available on SUN. This will change.
- Cannot span volumes yet - limiting file size
- Has the SUN limitation of 2 GB per filesystem. This would
be a bigger problem if you used it for a 3480 silo.
{Note: the 2 GB limit applies to each magnetic disk filesystem,
not the entire HSM store}
- Behaves poorly with small files (due to necessary overhead)
- Since inodes are kept on magnetic cache, you must take
into account the maximum number of files you will ever need.
- Since inodes are always on disk, certain disk operations
can take forever since all inodes must be examined.
- Enlarging a magnetic disk filesystem which has associated
archive media requires you to offload all data and then reload it.
If anyone has found another way, I would like to hear about
it.
{Others did offer some easier work-arounds}
In all fairness to Titan, they have been addressing any problems and
it has been improving. Epoch too plans to address some of their
shortcomings. We are looking forward to growing with both products.
The likelihood that the various flavors of Unitree will standardize
depends on what happens with Discos. My guess is that some
features/enhancements will be filtered back to the base product
released by Discos. Bye...
(bodoh@dgg.cr.usgs.gov, 152.61.192.66, Tom Bodoh, USGS/EROS Data
Center, Sioux Falls, SD)
Subject: [3.2] National Storage Lab {Brief}
From: Hierarchical Storage Management
NSL is an industry consortium (American companies only) that has a
version of Unitree, and is creating their own new High Performance
Storage System.
HPSS, among other features, supports striping of removable media, and
full 64-bit files. Some of the work is being done at LLNL, where
UniTree was originally developed.
There's a good overview reachable at
http://www.ccs.ornl.gov/HPSS/HPSS.html.
(rdv,95/1/12)
Subject: [3.3] HIARC {New}
From: Hierarchical Storage Management
HIARC HSM runs on Solaris 2.4 and above. Slides in at the vnode
layer. Supports 4mm, 8mm, 3480, DLT, VHS, D-1 and D-2 tape drives,
and appropriate robotics (I don't have a specific list). Removable
media formats are standard (_which_ standard, I don't know). Pricing
from $4k to $25k is reasonable for the functionality. See
http://www.hiarc.com. (rdv, 97/3/20)
Subject: [3.4] Epoch (also known as StorageTek's NearNet) {Brief}
From: Hierarchical Storage Management
See also "Epoch vs Unitree" in Appendix
Subject: [3.5] Zetaco/NETstor {Brief}
From: Hierarchical Storage Management
NETstor can be reached at netstor-sales@netstor.com
NETstor, Inc. (formerly Zetaco, Inc.) is a leading provider of
hierarchical online mass-storage systems for open systems, primarily
NFS-accessible systems with magnetic disks and optical-disk libraries.
They have marketing agreements with Digital Equipment Corp, and
Hewlett-Packard.
(stith@fnal.gov)
Netstor was bought by Cheyenne, and is now sold by them
(lily@access.digex.com, 10/95).
Subject: [3.6] R-Squared Infinity IFS 2 {Brief}
From: Hierarchical Storage Management
Contact: Steve Wine, Manager, Mass Storage Products, R-Squared, 11211
East Arapahoe Rd, Englewood, CO 80112, 303/799-9292 or FAX 303/799-9297
Subject: [3.7] AMASS
From: Hierarchical Storage Management
From Advanced Archival Products. Supports a huge range of devices,
autochangers, and operating systems. Block-based movement of data
between the hard disk cache and tape or optical tertiary storage.
Systems run from a few gigabytes up to at least 12 TB, with prices
dependent on capacity. New versions allow multiple cache disks. Slips
right into the VFS layer and looks like a normal Unix file system,
with the pluses and minuses that entails. No file versioning or
multiple copies yet. File creation is an Achilles' heel on
performance. Since it's block based, files can be larger than a piece
of media. Separate product DataMgr will migrate files from client
machines to the AMASS server automatically (with FS changes, of
course).
AMASS is now owned by EMASS, and you can find info at
http://www.emass.com/Products/Software_Products/AMASS/AMASS_Top.html.
(rdv, 1996/3/27)
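As a rough illustration of why block-based staging lets a file exceed
the size of a piece of media: each block of a file maps to a
(volume, offset) pair independently, so consecutive blocks can live on
different cartridges. A sketch of the idea (hypothetical Python, not
AMASS's actual on-media format):

    BLOCK_SIZE = 256 * 1024    # hypothetical block size

    # An ordered list of block locations per file; nothing requires
    # consecutive blocks to live on the same piece of media.
    file_map = {
        "bigfile.dat": [
            ("TAPE_001", 0),            # early blocks on one cartridge
            ("TAPE_001", BLOCK_SIZE),
            ("TAPE_002", 0),            # later blocks on another, so
            ("TAPE_002", BLOCK_SIZE),   # the file spans volumes
        ],
    }

    def read_block(name, n, mount, read_at):
        """Read block n of a file, mounting whatever volume holds it.
        mount/read_at are stand-ins for robot and drive operations."""
        volume, offset = file_map[name][n]
        drive = mount(volume)           # may trigger a robot exchange
        return read_at(drive, offset, BLOCK_SIZE)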
Subject: [3.8] Tracer XFS {None}
From: Hierarchical Storage Management
Subject: [3.9] Metior
From: Hierarchical Storage Management
Metior (pronounced like meteor) is targeting an incredibly broad
market, from laptops with removable media through supercomputers, with
prices from $650(!) to $118K. They handle multiple coordinated copies,
so off-site backup can be automatic. Can do migration for client
machines (with appropriate software licenses and changes to the file
system). The hierarchy seems to be extremely flexible, variable on a
per-user or per-group basis. Machines without client licenses can
mount the Metior FS using NFS. Runs on Suns, SGI, and HP 9000/700. ANT
is new, and they've only got a handful of customers so far, but it
looks _very_ interesting.
(info from habbott@csn.org, written by rdv, so it's my fault if it's
not accurate) (rdv,94/7/7)
More information available on the WWW FAQ version.
Also see them at http://anthill.com.
Automated Network Technologies
3333 South Bannock Street, Suite 945
Englewood, CO 80110 USA
Phone 303.789.2506
FAX 303.789.2438
Email hal@anthill.com
Subject: [3.10] NAStore {Brief}
From: Hierarchical Storage Management
NAStore is a Unix migrating file system developed by the Numerical
Aerodynamic Simulation program at NASA Ames. It is available through
NASA's software distribution agency, COSMIC. It currently runs only on
Convex with 34x0 cartridges and Storage Tek robots. Looks like a local
file system to users of the Convex. Available with source.
Info on NAStore can be found on the web at
http://chuck.nas.nasa.gov/NAStore/NAStoreQR.html
COSMIC's address is :
University of Georgia
382 East Broad Street
Athens, Georgia, 30602-4272, US
011-706-542-3265
service@cossack.cosmic.uga.edu
For more information on NAStore, contact John Lekashman, lekash@nas.nasa.gov.
(info from Bill Ross, bross@nas.nasa.gov, 94/9/15)
Subject: [3.11] DMF {Brief}
From: Hierarchical Storage Management
Cray Research's Data Migration Facility. The granddaddy of Unix HSM
systems. You can find info on DMF at
http://www.cray.com/product-info/sw/SM/DMF_flyer.html, or call +1 612
683 3897 or email crayinfo@cray.com. It's reportedly
running on more than 200 systems, and development is continuing. Large
users are in the hundreds of TB, with millions of files and >1TB/day
through DMF.
Information from:
"Storage Management at Cray Research, Inc", Metcalfe, D.J. and Thompson. D.
"Data Migration Facility Development Update", Lazatella, T.W. and Bannister, N.
Cray User Group, Barcelona, 1996, in press.
(Robert.Bell@dit.csiro.au, 1996/4/2)
Subject: [3.12] FileServ {Brief}
From: Hierarchical Storage Management
From E-Systems. Works with the E-Systems ER-90 (D-2) tape drive and
Odetics robots, as well as 3480 with the Storage Tek ACS 4400. Runs on
Convexes (only?). Supports multiple copies of files. Retrieves only
necessary info from tape to disk before completing request.
Reportedly no longer available on Convex, in beta test on SGI
(lily@access.digex.net, 10/95)
Now owned by EMASS, info at
http://www.emass.com/Products/Software_Products/FileServ/FileServ_Top.html.
Subject: [3.13] Cray Research's Open Storage Manager {Brief}
From: Hierarchical Storage Management
They have some agreement with Legent Corporation. OSM runs on Sparc
machines, including the Cray Superservers. Price ranges from $500 to
$5,000, which is very cheap for HSM. However, it might only be capable
of migrating among disks -- I don't see any mention of autochangers.
(rdv, 94/12/9)
Subject: [3.14] T-mass {None}
From: Hierarchical Storage Management
Subject: [3.15] HP OpenView OmniStorage
From: Hierarchical Storage Management
Supports multiple types of tertiary media (optical, tape) though it
seems to come originally from their work for their own MO jukeboxes.
Supports multiple types of clients. (info from Herbert Volk
<herbert@quirlie.bbn.hp.com>, 1995/9/28)
More info available at http://www.hp.com/go/openview. Now a very broad
storage management suite, covering lots of functionality for
management. Supports MO, DLT and 8mm as media, though only a limited
number of autochangers. (rdv, 98/1/16)
Subject: [3.16] Platinum NetArchive-HSM {Brief, New}
From: Hierarchical Storage Management
Used to be ASC (Advanced Systems Concept) before being bought by
Platinum. Runs on SunOS, HP, and Domain/OS. Supports numerous optical
jukeboxes. See http://www.platinum.com. (rdv, 96/4)
PLATINUM technology, inc.
1815 South Meyers Road
Oakbrook Terrace, IL 60181
1-800-442-6861 -or- 708/620-5000
e-mail: info@platinum.com
Subject: [3.17] Large Storage Configurations {Brief,New}
From: Hierarchical Storage Management
http://www.lsci.com describes their Solaris-based HSM product. Only one
computing platform, but a reasonably broad range of mid- to high-end
peripherals and robotics supported, from little Exabyte autochangers
to the IBM 3494 and STK silos. (rdv, 96/7/23)
Subject: [3.18] Unix HSM Vendor List
From: Hierarchical Storage Management
This list is adapted from _Client/Server Today_, Dec. '94, with some
of my own additions. All the phone numbers are USA (apologies to
international readers for the 800 numbers, but they're all I've got).
I don't know anything about some of these companies; I suspect some of
them work with HSM from other vendors rather than their own packages.
I've indicated on the list various reports of companies OEMing from
each other; this is not out of disrespect for the work involved in
OEMing/supporting or porting such complex software, but an attempt to
divide the HSM vendors into "families" with similar capabilities
(occasionally on very disparate platforms).
Vendor Product Contact
------ ------- -------
Advanced Archival Products AMASS (303)792-9700 *
Advanced Software Concepts (ASC) (619)737-9544
Alphatronix ASC (919)544-0001
Artecon ASC (619)931-5500
AT&T CommVault DataMigrator (908)935-8000
Automated Network Technologies (ANT) Metior (303)789-2506 *
Computer Associates International (800)225-5244
Computer Upgrade (808)874-8807
Convex Computer UniTree (214)497-3085 *
COSMIC (NAStore) (706)542-3265 +
Cray Research DMF (800)BUY-CRAY *
Digital Equipment (DEC) NETstor (800)344-4825
Dorotech (703)478-2260
Epoch Systems (508)836-4300 *
E-Systems FileServ ?*
File Tek Storage Machine (301)251-0600
Fujitsu Computer Products of America OSM (408)432-6333
Hewlett-Packard OmniStorage*, NETstor (800)637-7740x8509
HIARC (714)253-6990
IBM UniTree (800)225-5426
Introl (612)788-9391
Large Storage Configurations (LSC) (612)482-4535 *
Legent $OSM (703)708-3000
National Storage Lab (NSL) HPSS +*
NETstor (Cheyenne) $NETstor (612)890-9367
(OpenVision UniTree (510)426-6400 *)
Platinum NetArchive HSM (708)620-5000 *
Qstar Technologies (301)762-9800
Raxco (301)258-2620
Software Partners/32 (508)887-6409
Storage Technology (STK, StorageTek) (303)673-5151
T-mass ?
Tracer XFS ?
UniTree Software UniTree (510)833-9344 *
* = Info elsewhere in FAQ
+ = not commercial product
? = no contact info
$ = original developer (no mark indicates OEM)
Subject: [3.19] Mainframe
From: Hierarchical Storage Management
IBM also has HSM for MVS, called, imaginatively, HSM.
IBM's storage home page is at http://www.storage.ibm.com/storage/. I
have also found references to System Managed Storage (SMS), HSM, and
DFHSM (Data Facility Hierarchical Storage Manager), but could find no
online information. There are probably manuals like the DFHSM Version
2 Release 5.0 General Information manual (GH35-0092), if you are a
real glutton for punishment and have a friend at IBM.
So we have ADSM and DFHSM and DFSMS and probably others, but not much
online information. Sorry.
A little searching from http://www.ibm.com might turn up something
too.
(Del Cecchi, <dcecchi@VNET.IBM.COM>, 1996/3/27)
Subject: [3.20] PC & PC Server Oriented Packages
From: Hierarchical Storage Management
Subject: [3.20.1] HP Optical Jukebox Storage Solution
From: Hierarchical Storage Management
Netware 3.11 based, up to 10.4 gigabytes; includes the model 10LC
optical jukebox, which has one drive and 16 disks, each with 650 MB
formatted capacity. Hewlett-Packard (Palo Alto, CA) 800/826-4111.
Subject: [3.20.2] Chili Pepper Software
From: Hierarchical Storage Management
A company from Atlanta, GA named Chili Pepper Software (404-339-1812)
and 3M have gotten together in some fashion to make HSM software for
PCs using QIC. (rdv, 94/9/5)
Subject: [3.20.3] Cheyenne ARCserve
From: Hierarchical Storage Management
Runs on Netware servers. Transparent to most clients, but has a neat
feature: if you use a special TSR and DLL on client PCs, when it has
to retrieve a file from secondary or tertiary storage, it can give you
an estimated retrieval time and the option to abort. (516)484-5110,
(800)243-9462.
(rdv,95/02/14)
Subject: [3.21] DATMAN {Brief}
From: Hierarchical Storage Management
Simple HSM for 4mm tape drives under MS-DOS. A limited freeware
version is available.
More info at http://www.datman.com.
Voice: 708-369-7112 Fax: 708-369-7113 (Kan Yabumoto,
yabumoto@datman.com, Nov. 1995)
Subject: [3.22] Windows NT
From: Hierarchical Storage Management
Try:
Avail Systems
4760 Walnut St
Boulder, CO 80301
voice: +1.303.444.4018
fax: +1.303.546.4219
dave_skinner@intellistor.com (Dave Skinner) (95/2/12)
Avail's product, NetSpace HSM, has been selected by Microsoft to be
incorporated into future versions of NT, and also provides a link
between NetWare and IBM's ADSM. NetSpace also runs on Novell NetWare
systems. See http://www.avail.com ("Wight, Risa" <risa@avail.com>,
95/10/17)
Subject: [3.23] Other Non-Unix HSM
From: Hierarchical Storage Management
DEC's old Tops-20 OS supported offline files, and would generate an
automatic request to the operator to mount a tape when the user
accessed the file. When you listed a directory, it would show you
which files were online and which off.
DEC's OpenVMS has some sort of support for this now. VMS 6.1 supports
"shelved" files.
There is also the product Virtual Branches, from Acorn Software, which
does HSM for MO and CD-ROM for OpenVMS.
Acorn Software, Inc.
267 Cox St.
Hudson, MA 01749
voice: (508)568-1618
fax: (508)562-1133
Internet: info@acornsw.com
Subject: [3.24] Tapes as Disks {Brief, New}
From: Hierarchical Storage Management
There are several packages around (mostly for PCs) that will let you
use a tape drive like a disk drive. Of course, it's _very_ slow
unless it uses some disk-based information as well.
See http://www.tapedisk.com for one such product. (rdv, 96/11/4)
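The disk-based information matters because tape has no useful random
access on its own: without an index on real disk, even a directory
listing means searching the tape. A toy sketch of the usual trick
(hypothetical Python; the 'tape' object stands in for a real drive
with append/locate/read operations):

    # The block directory lives on magnetic disk, so lookups and
    # listings never touch the tape; only file data does.
    directory = {}    # name -> (tape_block, length), kept on disk

    def write_file(name, data, tape):
        block = tape.append(data)     # sequential write at end-of-data
        directory[name] = (block, len(data))

    def read_file(name, tape):
        block, length = directory[name]
        tape.locate(block)            # slow: physical repositioning
        return tape.read(length)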
Subject: [4] Backup Software
From: Backup Software
Backup software usually provides some form of management of files,
tapes, and autochangers. Retrieval of files is not automatic (as in
true HSM). These are designed to allow you to recover from disk or
file system failures, and to recover files accidentally (or
maliciously) deleted or corrupted. Some work in conjunction with HSM
systems, which are often vulnerable to the latter class of problems.
I've concentrated here on backup software that supports various
autochangers, as this is of more interest to people in this group than
standalone software for backing up one hard disk onto one tape.
Subject: [4.1] PC-Oriented Backup Packages
From: Backup Software
I don't think any of the PC operating systems come with tape support
built in, so you have to have some 3rd party software to work with
tape. This short list is primarily oriented toward PC servers.
It's partly derived from _PC Magazine_, March 29, 1994, pp. 227-272.
Note that there has been an ongoing discussion of the pitfalls of
Windows 95 and third-party backup software; many in particular are
having trouble with long file names.
Arcada Software - Storage Exec. (NT)
Avail (NT)
Cheyenne Software - ArcServe (Netware)
Conner Storage Systems - Backup Exec (Netware)
Emerald Systems - Xpress Librarian
Fortunet - NSure NLM/AllNet
Hewlett Packard - OmniBack II (NT)
Legato - NetWorker (Netware)
Mountain Network Solutions - FileSafe
NovaStor (Netware)
Palindrome - Network Archivist (Netware, OS/2, Windows)
Palindrome - Backup Director
Performance Technology - PowerSave (Netware)
Systems Enhancement - Total Network Recall
Arcada is at 800/327-2232 and at http://www.arcada.com.
{Under Construction}(SHMO)
Subject: [4.2] Unix Packages
From: Backup Software
Some people claim "Unix tape support is an oxymoron," so there's a big
market in outdoing tar, dump, dd and cpio.
APUnix - FarTool
Cheyenne - ArcServe (see under PCs, above)
Dallastone - D-Tools
Delta MicroSystems (PDC) - BudTool
Epoch Systems - Enterprise Backup
IBM - ADSM (Adstar Distributed Storage Manager)
Hewlett Packard - OmniBack II
Legato - Networker
Network Imaging Systems
Open Vision - AXXion Netbackup 2.0 Software http://www.ov.com/product/nb.html
Software Moguls - SM-arch
Spectra Logic - Alexandria
Workstation Solutions
{Under Construction}(SHMO)
Subject: [4.2.1] Spectra Logic Alexandria
From: Backup Software
Spectra Logic makes 4mm & 8mm autochangers, but this software supports
other autochangers as well. A nice feature: it claims to be capable
of backing up live Oracle, Informix and Sybase databases.
email alexandria@spectra.wali.com. (rdv,95/2/14) On the web at
http://www.spectralogic.com
Subject: [4.2.2] ADSTAR Distributed Storage Manager
From: Backup Software
Runs on everything from OS/2, AIX and OS/400 to VSE/ESA, MVS and VM
providing backups for virtually everything you can think of in PCs and
workstations. (800)IBM-3333 or anonymous ftp to index.storsys.ibm.com.
(rdv,95/2/14) http://www.storage.ibm.com/storage/software/software.htm
or http://www.storage.ibm.com/storage/hardsoft/software/html/adsmhome.htm.
Subject: [4.2.3] NetWorker
From: Backup Software
Backup software. See http://www.legato.com. Runs on a wide variety of
platforms and supports a bunch of types of autochangers.
Legato Systems, Inc.
3145 Porter Drive
Palo Alto, CA 94304
Phone: 415-812-6000
Fax Number: 415-812-6032
Fax-on-demand: 415-812-6156
Subject: [4.2.4] BudTool {Brief}
From: Backup Software
PDC Engineering
111 Lindbergh Avenue
Suite C
Livermore, CA 94550 USA
(510) 449-6881
FAX (510) 449-6885
See http://www.pdc.com.
Subject: [4.2.5] HP OmniBack II {Brief, New}
From: Backup Software
HP's OmniBack II runs on several different platforms, and splits the
functionality up. The Backup manager appears to run only on NT, but
it can use devices attached to various flavors of Unix, and backs up
ten different kinds of Unix and PC clients. Now marketed jointly with
OmniStorage, their HSM system, in a (sales) program they call
OpenView. See http://www.hp.com/go/openview. (rdv, 98/1/16)
Subject: [4.2.6] Workstation Solutions {Brief}
From: Backup Software
See http://www.worksta.com. Runs on a variety of Unix platforms, and
supports a reasonably broad range (20GB-5TB) of autochangers and tape
systems (4mm, 8mm, DLT, VHS). (rdv, 96/7/8)
Subject: [4.2.7] Amanda {Brief, New}
From: Backup Software
I have subscribed to amanda-hackers-request@cs.umd.edu and
amanda-users-request@cs.umd.edu for some time. The "current"
distribution of Amanda seems to be from ftp://ftp.gps.caltech.edu/pub/,
with version 2.3.0.3. A very good backup system, with no dollar
investment. (David Olsen, <olsen@1-avd2.ds.boeing.com>, 1/23/97)
You'll also find a FAQ on it at
http://ugrad-www.cs.colorado.edu/~teich/amanda.
Subject: [4.2.8] Remote Backup or Mirroring {Brief, New}
From: Backup Software
It's now possible, in several fashions, to back up systems over a
network or even a modem, for recovery from fires and even disk
crashes.
Channel extenders, such as the CHANNELink
http://www.cnt.com/products/clnk/clnk2.htm from CNT and the Symmetrix
Remote Data Facility http://www.emc.com/symmdoc.htm, are used by some
mainframe systems to create remote copies of disks (remote mirroring)
as a disaster recovery measure. Early systems used dedicated fibre or
telephone lines and ran proprietary communications protocols. Newer
systems from CNT are capable of communicating over general-purpose
wide-area networks, thus saving the costs of the dedicated lines.
It's also possible to back up PCs over your modem in an incremental
fashion, through your ISP; one example is http://www.telebackup.com.
Two other companies that do this over the Internet (out of, I believe,
more than 30) are Connected Corp., Framingham, MA; Virtual Technology
Corp., Minneapolis, MN.
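The common thread in the PC-over-modem products is incremental
transfer: only files changed since the last pass cross the slow link.
A minimal sketch of that idea (hypothetical Python, not any of these
vendors' actual protocols; send() stands in for the transport):

    import hashlib, os, time

    def changed_since(root, last_run):
        """Yield files modified since the previous backup pass."""
        for dirpath, _, names in os.walk(root):
            for n in names:
                path = os.path.join(dirpath, n)
                if os.stat(path).st_mtime > last_run:
                    yield path

    def incremental_backup(root, last_run, send):
        """Ship only changed files to the remote site; send() is a
        stand-in for whatever modem/WAN transport is in use."""
        for path in changed_since(root, last_run):
            with open(path, "rb") as f:
                data = f.read()
            send(path, data, hashlib.md5(data).hexdigest())  # checksum
        return time.time()   # timestamp to record for the next run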
Subject: [5] Tape and Autochanger Management Software
From: Tape and Autochanger Management Software
This category of software can overlap with both HSM and backup, above,
and basic tools are often available from the autochanger hardware
vendors, below. New additions to this category welcome -- I'm sure
there are numerous vendors I don't know. Functionality varies widely,
from rudimentary "move the cartridge" interfaces to sophisticated
tape-tracking databases. (rdv, 95/6/1)
Subject: [5.1] REELlibrarian
From: Tape and Autochanger Management Software
Actually a whole set of software tools from Storage Tek, available
through Software Clearing House, http://www.sch.com/stor001.html.
Manages different types of media for you, including 3480 in STK silos,
under MVS or Unix. (rdv from esj@atlas.sch.com, 95/6/1)
Subject: [5.2] ANT Medium Changer
From: Tape and Autochanger Management Software
There is a public version of a Solaris 2.x Medium Changer driver
with a set of command line utilities in our FTP server.
Only restriction is that you cannot bundle it with another product
or resell it (intended for end-user use only).
ftp://anthill.com/pub/distrib/mc/solaris2.x/
or http://anthill.com/techsupport.html (Tim Sesow, ANT, 1995/9/21)
Subject: [5.3] Tapes 3000 {Brief}
From: Tape and Autochanger Management Software
Tapes3000 is a UNISON/TYMLABS product that puts a label on a reel
tape, DDS cartridge, or any other kind of storage media and adds it to
a "tapes database," so you do not have to manually log and label
backup tapes or special-request tapes, and possibly make a mistake.
You can also use it for unlabeled media, but then you have to log the
media manually. Each "dataset" can be given a different retention
(generations, weekly, monthly, daily, etc.). When those criteria are
met, the program automatically scratches the expired tapes; you then
run a report listing what scratched for that day, week, or whatever
period you choose (the sketch below illustrates the idea).
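A sketch of that retention/scratch pass (hypothetical Python, not
Tapes3000's actual database or behavior):

    import datetime

    # Hypothetical tape database: one record per labeled volume.
    tapes = [
        {"label": "B00123", "dataset": "daily",   "written": "1996-03-01"},
        {"label": "B00124", "dataset": "monthly", "written": "1996-01-15"},
    ]

    retention_days = {"daily": 7, "weekly": 35, "monthly": 365}

    def scratch_pass(today):
        """Mark expired volumes as scratch and report them."""
        scratched = []
        for t in tapes:
            written = datetime.date.fromisoformat(t["written"])
            if (today - written).days > retention_days[t["dataset"]]:
                t["status"] = "scratch"       # free for reuse
                scratched.append(t["label"])
        return scratched                      # the scratch report

    print(scratch_pass(datetime.date(1996, 3, 25)))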
Tapes3000 is part of a package that you can receive called MAESTRO
which is a job scheduler program. Not an autochanger control package,
just tape management software. (R Johnson
<rljohn@lss2.labsafetysupply.com>, 1996/3/25)
Subject: [5.4] Others
From: Tape and Autochanger Management Software
Many of the HSM (including EMASS, above, with their VolServ) and
backup vendors also sell simple autochanger control interfaces. Check
with them.
Some things I've read indicate that one or more of the
university-based projects ought to have a freely available autochanger
controller; if anybody has any info on this let me know.
Subject: [6] Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
I use the term "robotics" to refer to access to multiple removable
volumes by a smaller number of drives, without a person. This includes
sequential stackers as well as random-access robotics.
A stacker typically is capable of taking (literally) a stack of tapes
and putting them into the drive one at a time, in order. There is no
random access to specific tapes, as there is with a full-function
autochanger. Stackers typically are limited to 8-10 cartridges, and
are used by people whose backups have exceeded the size of one
cartridge.
In the larger media formats, such as D-1, D-2, Betacam, etc., the
traditional manufacturers of broadcast autochangers, such as Asaca,
Odetics, Sony, etc. have products that are easily adaptable to storage
use.
The August 1996 issue of Byte magazine has an article comparing 12
tape autochangers. It is a little misleading: it doesn't mention any
of the truly large library systems, and includes only one midrange
unit, whose capacity is quoted assuming DLT 7000 tape drives, an
assumption that is never mentioned. In addition, much of their testing
is more related to the drives than the autochangers.
Subject: [6.1] 8mm {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Subject: [6.1.1] Exabyte {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Phone: 800/EXABYTE, 1685 38th st, Boulder, CO 80301, Fax 303/447-7689.
On the web at http://www.exabyte.com.
Subject: [6.1.1.1] EXB-10h
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Current model, 10 cartridges, one drive. Not Mammoth compatible. 70
GB, uncompressed. (rdv,96/8/29).
Subject: [6.1.1.2] EXB-210
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
2 drives, 11 cartridges, not Mammoth compatible.
Subject: [6.1.1.3] EXB-220
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
2 drives, 20 cartridges, Mammoth compatible (rdv,96/8/29).
Subject: [6.1.1.4] EXB-440/480
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
40 or 80 cartridges, 2 or 4 drives, Mammoth compatible. 1.6 TB
uncompressed, with Mammoth. (rdv,96/8/29).
Subject: [6.1.1.5] EXB-10
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Ten cartridges, one full-height drive. Original 10 cartridge robot.
No robotic intelligence, when one tape comes out, the robot mounts the
next tape in sequence (i.e. a kind of stacker). Button selectable to
loop back to the first tape or to stop at the end. Discontinued.
Subject: [6.1.1.6] EXB-10i
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Ten cartridges, one full-height drive. Released
shortly after the EXB-10. Includes SCSI attachment to robotics. Now
nearly replaced by the EXB-10e. Discontinued.
Subject: [6.1.1.7] EXB-10e
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Ten cartridges, one full-height drive. Announced around 4/93.
Includes better control panel and display than EXB-10i. Drive mounted
horizontal and tape magazine at slight angle (rather than vice-versa
in EXB-10i). Discontinued.
Subject: [6.1.1.8] EXB-120
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Holds 120 8mm cartridges, up to four full-height drives.
Discontinued.
Subject: [6.1.2] ADIC {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
I'm not sure if ADIC manufactures or OEMs their robotics, but they
apparently sell to end users. They have 8mm, 4mm and DLT autochangers
in a variety of small to medium sizes, up to about a terabyte. See
http://www.adic.com. (rdv,97/3/18)
Subject: [6.1.3] Storage Tek (was Lago) DataWheel {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Holds 54 8mm tape cartridges in a carousel with 2 8mm drives. The
carousels are removable. Now Storage Tek, used to be a small company
called LAGO, which apparently no longer exists.
You'll find info at: http://www.stortek.com/StorageTek/9708.html. (rdv,
updated 1996/3/22)
Subject: [6.1.4] ACL {None}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Subject: [6.1.5] Cambridge On-Line Storage {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Sixty and 240 GB libraries, 713/981-3812
Subject: [6.1.6] Spectra Logic {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Spectra Logic makes SCSI-controlled 8mm and 4mm (DAT) autochangers.
One to four drives, with 20 to 60 slots. Capacity currently up to 600
GB of DDS-2 (4mm) or 300 GB 8mm. Early models (STL-6000 & STL-8000)
were a rotating carousel. Newer ones use an arm and the tapes don't
move.
Supported by a variety of software vendors. List prices of $9K
(Spectra 4000/20 slots, one DDS-2 drive) to $31K (60 slots with four
drives and barcode support) including drives.
They also make a thing called TapeFrame, which consists of several of
their autochangers working in conjunction, with capacities up to 2.2
TB.
U.S.: 1-800-833-1132 or 303-449-6400
(Britt Terry, britt@spectra.wali.com, 95/1/12)
See also under backup software, and on the web at
http://www.spectralogic.com.
Subject: [6.1.7] Qualstar {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Makes 8mm libraries that hold 10 to 120 cartridges and 2 to 6 drives.
tel:(818)592-0116 fax:(818)592-0061 or http://www.qualstar.com
(rdv,95/2/14)
Our TLS-4000 8mm library family now supports the Sony SDX-300C drive.
Production shipments have started and enduser installations have
occurred. Early field reports are completely positive.
TLS-4000 also supports Exabyte Mammoth and 8505XL drives.
(Bob Covey, covey@qualstar.attmail.com, 96/10/22)
Subject: [6.2] 3480
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Subject: [6.2.1] StorageTek {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Storage Tek makes huge autochangers, referred to as silos, round and
several (~5) meters in diameter. They hold 6,000 3480-style tapes. At
original 3480 densities, that's only 1.2 TB per silo, but capacities
have gone up to (I think) 800 MB/cartridge, and are poised for a HUGE
jump if Storage Tek gets their Redwood tape drive finished (in beta
test, 12/94), up to 20 GB/cartridge, 120 TB/silo.
There is a smaller silo, known as WolfCreek, that holds 500-1000
cartridges.
STK also OEMs a 3480 autochanger from Odetics. It holds ~260
cartridges, I think, in a rotating drum, with room for ?2? tape drives
above it.
(rdv,95/1/12) However, I couldn't find any info about this on the web
site.
They also have a web site at http://www.stortek.com. (95/5/16,rdv)
All but the Odetics (known as Ocean, I think) are Redwood-compatible.
The new 9710 (codenamed Panther) can handle both DLT and 3480
cartridges in a mini-tower. (quodlingp@cim.alcatel.oz.au, 1996/3/12)
Subject: [6.2.2] EMASS (was GRAU) {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Grau, a German manufacturer, makes high-end, very large capacity
mixed-media autochangers known as the ABBA series, targeted, I
believe, primarily at the IBM mainframe market. (rdv,94/11/7)
Bought by EMASS, see http://www.emass.com. They support 3480, D-2, MO,
VHS, DLT, 8mm all in one robot, so they renamed the autochanger series
the AML, Automated Mixed-Media Libraries.
Subject: [6.2.3] 3590 (Magstar,NTP) {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
MountainGate has announced that they will have, later this year, a
300-cartridge autochanger.
IBM of course makes numerous autochangers for NTP; the 3494 and 3495
models both support it. More info at
http://www.storage.ibm.com/storage/hardsoft/tapls.htm. (They probably
have smaller libraries, too.)
Word in the newsgroup has it that STK robots won't support Magstar due
to the rivalry between IBM and STK.
(rdv,1996/3/12)
Subject: [6.3] 4mm {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Subject: [6.3.1] Cambridge On-Line storage {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Libraries of 120 and 40 GB, 713/981-3812
Subject: [6.3.2] Spectra Logic {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Spectra Logic makes SCSI-controlled 8mm and 4mm autochangers. See
above under 8mm autochangers.
Subject: [6.3.3] HP 4mm {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
I think HP makes their own 4mm autochangers.
Subject: [6.3.4] Storage Tek Datawheel {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
The 4mm version. 25 cartridges, so up to 100GB uncompressed. Info at
http://www.stortek.com/StorageTek/9704.html.
Subject: [6.3.5] Diverse Logistics Libra {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Two libraries, the Libra-8 and Libra-16, with 8 or 16 slots (32 or 64
GB uncompressed) and one DAT drive. Info at http://www.dilog.com or
http://www.dilog.ch (Europe), info@dilog.com or
info@dilog.ch. (schaefer@dilog.ch (Marc SCHAEFER), 96/8/6)
Subject: [6.3.6] Qualstar {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
See http://www.qualstar.com.
We are now shipping our TLS-2000 4mm tape library family. This product
line consists of 6 models ranging from 1-2 drives with 18 tapes, to 1-4
drives with 144 tapes. All units include a mailbox and barcode support. I
believe that the TLS-24144 is the largest 4mm library in production.
TLS-2000 supports Seagate, Sony and HP DDS-2 drives and we are about to
start testing the Sony SDT-9000 DDS-3 drive.
(Bob Covey, covey@qualstar.attmail.com, 96/10/22)
Subject: [6.3.7] ADIC {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
I'm not sure if ADIC manufactures or OEMs their robotics, but they
apparently sell to end users. They have 8mm, 4mm and DLT autochangers
in a variety of small to medium sizes, up to about a terabyte. See
http://www.adic.com. (rdv,97/3/18)
Subject: [6.4] VHS {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Subject: [6.4.1] MountainGate (was Metrum)
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Metrum's data storage division was bought by Lockheed Martin and
renamed MountainGate.
Autochangers for their VHS-based high-capacity (20GB, 2 MB/sec.) tape
drive. They now have a stacker available for standalone drives.
Library of 960 GB (RSS-48b) holds 2 drives and 48 cartridges in a
rotating drum.
Library of 12 TB (RSS-600b) holds 5 drives and 600 cartridges in less
than 20 square feet of floor space. The tapes are held in rotating
drums on each side, with the drives in a rack in between.
OEMs through Convex, IBM, and a host of resellers. Integrated with
various backup and HSM packages, including UniTree from Convex & IBM,
and AMASS from AAP.
See MountainGate also under MO and DLT autochangers.
MountainGate
A Lockheed Martin Company
9393 Gateway Drive
Reno NV
89511-8910
702-851-9393 Phone
702-851-5533 Fax
See them on the web at http://www.mountaingate.com, but as of today
(1996/3/19) doesn't have much on the high-end products.
Subject: [6.5] Digital Linear Tape (DLT) (Quantum) {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
T* names are DEC's names, DLT2* names are OEM names.
Subject: [6.5.1] TZ877 {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
One TZ87 tape drive, 7 cartridges, each 10GB native
Presumed to be the same as the DLT2700 library.
Ref: Digital's Customer Update, March 14, 1994
Subject: [6.5.2] TL820 {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Holds 3 TZ87 tape drives, 264 cartridges, five libraries attachable
Presumed to be Odetics made (714/774-5000)
About $150K U.S.
Ref: Digital's Customer Update, March 14, 1994
Subject: [6.5.3] MountainGate
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
At Comdex '94 in Vegas, Metrum (now MountainGate) introduced the D-900
(900 cartridges, up to 20 drives, 9TB uncompressed for DLT-2000) and
D-360 (360 cartridges, up to 8 drives, 3.6 TB uncompressed for
DLT-2000) DLT autochangers. There is an expansion unit with 480
cartridges which may hold two drives. Up to eight D-360 or D-480 units
can be connected via passthrough. They also introduced 28 and 60
cartridge DLT autochangers. Customer shipments starting in early '95.
See above under VHS for contact info.
Subject: [6.5.4] Breece Hill {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Breece Hill makes two small (28 and 60 cartridges) DLT autochangers.
On the web at http://www.csnet.net/breece_hill/
Breece Hill Technologies, Inc.
6287 Arapahoe Avenue
Boulder, Colorado 80303 USA
For more Information 1-800-941-0550 or 303-449-2673
Subject: [6.5.5] Odetics {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Odetics makes a series of DLT libraries that hold, in the basic
configuration, 3 DLT drives and 264 cartridges. See
http://www.odetics.com
Subject: [6.5.6] MediaLogic ADL
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
MediaLogic ADL, Inc.
1965 57th Court
Boulder, CO 80301
Voice: 303-939-9780
Fax: 303-939-9745
email: adlinfo@adlinc.com
They have desktop autochangers up to 26 DLT cartridges. See also
http://www.csn.net/adlinfo/ on the web. Also have 4mm and 8mm
autochangers that are similar. I don't know if they manufacture these
or OEM them. (rdv, 1996/3/19)
Subject: [6.5.7] ADIC {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
I'm not sure if ADIC manufactures or OEMs their robotics, but they
apparently sell to end users. They have 8mm, 4mm and DLT autochangers
in a variety of small to medium sizes, up to about a terabyte. See
http://www.adic.com. (rdv,97/3/18)
Subject: [6.6] D-2
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Subject: [6.6.1] Ampex
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Ampex makes their own autochangers for the DST DD-2 tape drive (see
part 1 of the FAQ).
DST 410 Automated Cartridge Library:
Up to 1.2 terabytes capacity (uncompressed) in 7 square feet of floor space.
All 3 cartridge (cassette) sizes supported - 25, 75, 165 gigabytes
(uncompressed).
SCSI Medium Changer Commands or Ethernet NetSCSI protocol.
Console mounted configuration.
Single unit price: $150K.
DST 810 Automated Cartridge Library:
Up to 6.4 terabytes (uncompressed) in 21 square feet of floor space.
Robotic performance of 600 cartridge exchanges per hour.
Average access time to any file less than 30 sec. (including cartridge
exchange, drive load and search to data).
1 to 4 tape drives per library.
Ethernet NetSCSI protocol robotics control.
Starting single unit price: $300K.
(pete_zakit@ampex.com, 94/12/23)
Subject: [6.6.2] Odetics
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Odetics makes a thing called a DataTower that holds ~250 S-size D-2
cartridges. It used to be, but is no longer, sold through EMASS for
use with the ER-90 (the Ampex/EMASS D-2 drive). It's a small silo that
sits in front of one rack of drives.
They also make an expandable library known as the DataLibrary, with a
maximum capacity of ten petabytes(!) (ten million gigabytes). A robot
handler runs on a track down an aisle lined with cartridges, and tape
drives at one (both?) end(s) of the aisle. I think the aisles can vary
in length, and they can be lined up next to each other and I believe
cartridges will pass between them.
(Note: since their acquisition of GRAU (above) EMASS no longer sells
Odetics. I don't know if these are still available directly from
Odetics and who you'd get to do the integration work. (rdv, from
Dave.Barnes@fox.emass.com (Dave Barnes), 1996/3/22))
Subject: [6.7] ID-1
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Subject: [6.7.1] Sony DMS, PetaSite {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Sony sells three autochangers for their ID-1 line of tape drives,
based on their broadcast line of autochangers. These are known as the
DMS Series, models 24, 300M, and 700M. Not surprisingly, they hold,
respectively, 24 (S,M, or L cassettes), 300 (M only) and 700 (M only)
cassettes for capacities of 2.3, 13, and 30 terabytes.
They have also announced something called PetaSite, which they claim
expands to 3 petabytes and supports both ID-1 and DTF in a single
system.
(rdv, 1996/3/22)
Subject: [6.8] Optical Disk (MO,WORM) Libraries
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Several other Japanese manufacturers make optical libraries, I think,
mostly in support of their own drives. (SHMO)
Subject: [6.8.1] Hitachi 448 GB optical library
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
12-inch WORM, up to 7 GB per platter, 2-4 drives; an additional
cartridge expansion unit increases capacity by 560 GB, to 1,008 GB.
Drive rates up to 2.22 MB/sec.
Phone: 800/HITACHI
Subject: [6.8.2] HP MO Autochangers
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Makes several models, from 16 disks and one drive up to 144 disks and
?4? drives. These are very popular.
Subject: [6.8.3] Maxoptix MO Autochangers
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Makes several models in the MaxLyb series, the 52, 120 and 180, which
correspond to the capacity in gigabytes for 1.3 GB drives. They hold,
respectively, 2 (52), 2 or 4 (120) and 2, 4 or 6 (180) drives.
They also have a fairly mysterious thing called the Axxis^26, a "high
speed network file retrieval & backup server," which is obviously an
MO autochanger, apparently bundled with a license for Palindrome
Backup Director, suitable for attaching to your Netware file server?
tel: (408)954-9700, (800)848-3092
fax: (408)954-9711
(rdv,95/02/14)
Subject: [6.8.4] MountainGate {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Now has the OSS-626, which holds 450-626 disks and 2-24 full-height HP
drives. Also a new expandable multi-chassis autochanger similar to the
D-360 DLT autochanger is available.
See above under VHS for contact info.
Subject: [6.8.5] DISC DocuStore {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Makes large libraries (up to ~1,000 5.25" MO cartridges, 2.6TB for
standard MO or 4.6TB for non-standard); see
http://www.discjuke.com. (Stephen Fister <fister@Synopsys.COM>, 96/8/7)
Subject: [6.8.6] Kodak {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Kodak makes their own autochangers for their large (?12"?) optical
drive.
Subject: [6.8.7] Sony {Brief}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Sony makes their own jukeboxes for their 12" WORMs and for 5.25" MO.
http://www.sel.sony.com/SEL/ccpg/storage/scontent.html is the place to
start.
Subject: [6.9] CD-ROM Jukeboxes
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Subject: [6.9.1] Pioneer
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
From a post by mc@msss.com (Mike Caplinger), 94/8/23:
Pioneer recently announced their DRM-5004X CDROM jukebox, which has
four quad-speed drives and holds 500 CDs for under $20,000.
Pioneer also has a 6-disk mini-changer, where SCSI LUNs 0-5 correspond
to the individual disks; accessing one causes a mount. (Brian A Berg
<bberg@bswd.com>, 1996/3/29)
There's also an 18-disk model. You can find info on all three at
http://www.pgb.pioneer.co.uk (rdv, 96/8/5)
Subject: [6.9.2] CyberTower {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
http://www.cyberdatasys.com has frustratingly little info on a product
that apparently is 7 CD-ROM drives made to behave like a single SCSI
target. Not really an autochanger, more of an array. Not sure who the
manufacturer is; the same unit is available from Procom
http://www.procom.com. (rdv,96/8/5)
Subject: [6.9.3] NSMJukebox {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
http://www.nsmjukebox.com describes what they call "the universe's
fastest CD-ROM jukebox". 150 platters (90GB), up to four drives. (rdv,
96/8/5)
Subject: [6.9.4] Nakamichi {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
A 4-disk changer built into an 8x
reader. http://www.nakamichicdrom.com. (rdv,96/8/5)
Subject: [6.9.5] CDI Juke Box Library {Brief,New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
A 28-disk changer (standalone network server?) with up to four drives,
and a built-in PC w/ 128 MB RAM and a 1GB disk. Available from
http://www.cdstorage.com. (rdv,96/8/5)
Subject: [6.9.6] K & S M-200 {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
A 200-disk autochanger. Available from
http://www.cdstorage.com. (rdv,96/8/5)
Subject: [6.9.7] DISC {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
Makes large libraries (up to ~1,500 media slots and up to 32 drives);
see http://www.discjuke.com. (Stephen Fister <fister@Synopsys.COM>,
96/8/7)
Subject: [6.9.8] Meridian {Brief, New}
From: Robotics (Autochangers, Jukeboxes, Stackers, Libraries)
CD Net Universal Server from Meridian, http://www.meridian-data.com.
Not really an autochanger, but an array of CD-ROM drives in a box with
an NFS or Netware interface. (rdv, 97/6/30)
Subject: [7] File Systems
From: File Systems
This topic is also discussed frequently in comp.os.research.
See http://www.maths.tcd.ie/scrg/os-faq.html.
Subject: [7.1] NFS {Brief}
From: File Systems
The Network File System was originally developed by Sun Microsystems
and is now pretty standard in the Unix world; clients also exist for
PC, Mac, VMS, and other non-Unix OSes. V2, the common version,
supports single files only up to 2^32 bytes (4 GB) in theory; in
practice most implementations use signed 32-bit offsets, limiting
files to 2 GB (see the large-files discussion below). I'm not sure if
there are any limits to a file system size under NFS, other than
those imposed by the client and server OSes (SHMO).
NFS is defined in RFC 1094. V3 is now RFC 1813.
There is at least one newsgroup devoted specifically to NFS:
comp.protocols.nfs.
Subject: [7.1.1] NFS V3
From: File Systems
NFS V3 supports 64-bit files and write caching.
The first implementation was from Digital with DEC OSF/1 V3.0 for
Alpha AXP. Silicon Graphics supports it on IRIX 5.3. Cray will support
it on UNICOS 9. I don't know about other vendors but I have heard
rumours that the releases coming in the second half of 1995 will
support it.
Further information on NFS V3 can be found from
gatekeeper.dec.com:pub/standards/nfs/NFS3.spec.ps.Z
(jmaki@csc.fi, 95/1/22)
Solaris 2.5, available Nov. 95, is reported to have V3 support.
Network Appliances have it as of 3.0, Sept. 95. (guy@netapp.com (Guy
Harris), 95/10/6)
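For the curious, the on-the-wire difference between the versions is
mostly just the width of the offset and size fields. Below is a rough
paraphrase in C of the flavor of the V3 XDR definitions from RFC 1813
(mentioned above); the struct and field names are abbreviations for
illustration, not the literal spec text:

  /* NFS V2 carried file offsets and sizes as 32-bit quantities;
     V3 (RFC 1813) widens them to unsigned 64-bit. */
  typedef unsigned int       uint32;
  typedef unsigned long long uint64;

  typedef uint32 offset2;  /* V2: hence the 2/4 GB file size limits */
  typedef uint64 offset3;  /* V3: full 64-bit byte offsets */
  typedef uint64 size3;    /* V3: file sizes, likewise 64-bit */

  struct read_args_sketch {
      /* file handle omitted from this sketch */
      offset3 offset;      /* starting byte of the read */
      uint32  count;       /* number of bytes requested */
  };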
Subject: [7.2] AFS {Brief}
From: File Systems
The Andrew File System (SHMO). Allows naming of files worldwide as if
they were a locally-mounted FS (from cooperating clients, of course).
There's an "alt" group for AFS - "alt.filesystems.afs". Available
commercially from Transarc.
Subject: [7.3] DFS {Brief}
From: File Systems
Presumably the OSF's Distributed File System (based on AFS; mentioned
in the original call for votes below). Another remote file system
protocol that supports large files. I don't know anything more about
it, or whether any implementations really exist yet.
Subject: [7.4] Log based file systems
From: File Systems
Further Information:
Mendel Rosenblum and John K. Ousterhout, "The Design and
Implementation of a Log-Structured File System", Proc. 13th ACM
Symposium on Operating Systems Principles (SOSP), Asilomar, Pacific
Grove, CA, 13 Oct. 1991, pp. 1-15.
Abstract: This paper presents a new technique for disk storage
management called a log-structured file system. A log-structured
file system writes all modifications to disk sequentially in a
log-like structure, thereby speeding up both file writing and crash
recovery. The log is the only structure on disk; it contains
indexing information so that files can be read back from the log
efficiently. In order to maintain large free areas on disk for
fast writing, we divide the log into segments and use a segment
cleaner to compress the live information from heavily fragmented
segments. We present a series of simulations that demonstrate the
efficiency of a simple cleaning policy based on cost and benefit.
We have implemented a prototype log-structured file system called
Sprite LFS; it outperforms current Unix file systems by an order of
magnitude for small-file writes while matching or exceeding Unix
performance for reads and large writes. Even when the overhead for
cleaning is included, Sprite LFS can use 70% of the disk bandwidth
for writing, whereas Unix file systems typically can use only
5-10%.
(tage@cs.utwente.nl)
Also, these papers:
Ousterhout and Douglis, "Beating the I/O Bottleneck: A Case for Log-
structured File Systems", Operating Systems Review, No. 1, Vol. 23, pp.
11-27, 1989, also available as Technical Report UCB/CSD 88/467.
Seltzer, "File System Performance and Transaction Support", PhD Thesis,
University of California, Berkeley, 1992, also available as Technical
Report UCB/ERL M92.
Seltzer, Bostic, McKusick and Staelin, "An Implementation of a Log-
Structured File System for UNIX", Proc. of the Winter 1993 USENIX Conf.,
pp. 315-331, 1993.
listed from the man page for mount_lfs under FreeBSD-2.1.5. (rdv, 97/1/17)
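The core write path of an LFS is simple enough to sketch. Here is a
minimal, hypothetical illustration in C of the central idea (mine,
not from the papers above; it ignores the inode map, checkpoints, and
the cleaner's policy): all writes are appended at the tail of a log
divided into large fixed-size segments, so the disk sees only big
sequential writes.

  #include <stdio.h>
  #include <string.h>

  #define SEG_SIZE (512 * 1024)      /* bytes per log segment */
  #define NSEGS    16                /* a toy "disk" of 16 segments */

  static char disk[NSEGS][SEG_SIZE]; /* stand-in for the physical disk */
  static int  cur_seg = 0;           /* segment currently being filled */
  static int  cur_off = 0;           /* write pointer within it */

  /* Append one block at the log tail and return its disk address.
     Writes never seek -- that's what makes small-file writes fast. A
     real LFS records the returned address in the inode map, and a
     segment cleaner later copies live blocks out of fragmented
     segments to make them clean again. */
  static long log_append(const void *block, int len)
  {
      long addr;
      if (cur_off + len > SEG_SIZE) {      /* segment full... */
          cur_seg = (cur_seg + 1) % NSEGS; /* ...a real FS picks a clean one */
          cur_off = 0;
      }
      memcpy(&disk[cur_seg][cur_off], block, len);
      addr = (long)cur_seg * SEG_SIZE + cur_off;
      cur_off += len;
      return addr;
  }

  int main(void)
  {
      char block[4096];
      memset(block, 'd', sizeof block);
      printf("block landed at log address %ld\n",
             log_append(block, sizeof block));
      return 0;
  }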
Subject: [7.5] Mainframe File Systems
From: File Systems
The WWW FAQ contains some information about mainframe file systems.
Subject: [7.6] Parallel System File Systems
From: File Systems
This discussion comes up occasionally on comp.arch and
comp.os.research. I don't know which newsgroups/mailing lists the PIO
(Parallel I/O) people hang out in, but it doesn't seem to be here.
They show up occasionally in comp.sys.super and comp.parallel. They
do have their own conferences, though.
The important work seems to be going on with the supercomputing gang
-- LLNL, CMU, Caltech, UIUC, Dartmouth, ORNL, SNL, etc. Work is also
being done by the parallel database community, including vendors such
as Teradata.
A paper presented at the ACM International Supercomputing Conference
in 1993 showed what seemed to me to be pretty appalling performance
for reading data and distributing it to multiple processors on an
Intel Delta supercomputer (sorry, I don't have the reference in front
of me). (rdv, 94/8/12) The paper is old now, and the Intel guys say
they have improved performance to up to 130 MB/sec. on the new Paragon
using their Parallel File System (PFS).
There is an excellent web site on parallel I/O at Dartmouth:
http://www.cs.dartmouth.edu/pario.html
There is also a mailing list housed at Dartmouth,
parallel-io@dartmouth.edu.
The annual conference is I/O in Parallel and Distributed Systems
(IOPADS); 1997's is co-located with Supercomputing '97 in San Jose,
Nov. 17. Papers are due March 25, 1997. See
http://www.cs.dartmouth.edu/iopads.
Subject: [7.7] Microsoft Windows NT {Brief}
From: File Systems
I seem to recall that NT supports 64-bit file systems for its own
native file systems? Anybody know for sure (SHMO)? (rdv, 94/8/24)
From *Inside the Windows NT(TM) File System*, by Helen Custer:
"NTFS allocates clusters and uses 64 bits to number them,
which results in a possible 2^64 clusters, each up to 4KB. Each
file can be of virtually infinite size, that is, 2^64 bytes
long."
"Clusters" can be between 512 and 4K bytes.
The Win32 API supports 64-bit file sizes, albeit in a cheesy fashion
reminiscent of V6 UNIX - no 64-bit integral types used, just pairs of
32-bit integral types. (guy@netapp.com (Guy Harris), 95/10/6)
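To make the "pairs of 32-bit integral types" concrete, here is a
minimal sketch in C using two Win32 calls that work this way,
GetFileSize() and SetFilePointer(); the file name is made up:

  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      DWORD hi = 0, lo;
      LONG seek_hi = 1;  /* high half: offset = 1 * 2^32 + 0 */
      HANDLE h = CreateFile("bigfile.dat", GENERIC_READ, FILE_SHARE_READ,
                            NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                            NULL);

      if (h == INVALID_HANDLE_VALUE)
          return 1;

      /* The 64-bit file size comes back as two 32-bit halves. */
      lo = GetFileSize(h, &hi);
      if (lo == 0xFFFFFFFF && GetLastError() != NO_ERROR)
          return 1;
      printf("size = %lu * 2^32 + %lu bytes\n",
             (unsigned long)hi, (unsigned long)lo);

      /* Seeking past 4 GB likewise takes the offset as a 32-bit pair. */
      SetFilePointer(h, 0, &seek_hi, FILE_BEGIN);

      CloseHandle(h);
      return 0;
  }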
Subject: [7.8] Large Unix File Systems
From: File Systems
There is now an industry group working on standardizing an API for
files larger than 2 GB (the max size normally supported on most Unix
systems). More info as I get it. The WWW-enabled can have a look at
http://www.sas.com:80/standards/large.file and see the various
proposals on the table.
Note that it is VERY easy to confuse whether an OS supports _files_
larger than 2 GB or _file systems_ larger than 2 GB. My table lists
some of both (thanks to ben@rex.uokhsc.edu (Benjamin Z. Goldsteen),
Ed Hamrick (EdHamrick@aol.com) and Peter Poorman (poorman@convex.com)
for much of this information).
It is straightforward for systems with 64-bit integers to support
64-bit files; for systems with 32-bit integers it is more complex. On
most 32-bit systems, the file offsets and sizes passed around inside
the kernel (most importantly, at the VFS layer) tend to be 32-bit
(signed) integers, meaning no files larger than 2^31 bytes.
On most systems, the argument to lseek is of type off_t, which (on
SunOS and Linux, and plausibly on OSF/1 and others) is declared in a
header file as "typedef long off_t;".
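To see what that means in practice, here is a small hypothetical C
program. On a system where off_t is a 32-bit long, there is no way
even to express an offset of 2^31 or beyond; the constant below is as
far as it goes:

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/types.h>

  int main(void)
  {
      off_t target = 2147483647; /* 2^31 - 1: the end of the road for
                                    a signed 32-bit off_t */
      int fd = open("bigfile", O_RDWR | O_CREAT, 0644);

      if (fd < 0) {
          perror("open");
          return 1;
      }
      if (lseek(fd, target, SEEK_SET) == (off_t)-1)
          perror("lseek");  /* some systems refuse even this */
      /* With "typedef long off_t;" an offset of 2^31 simply cannot be
         passed; 64-bit systems, or 32-bit systems with a separate
         64-bit call like ConvexOS's lseek64, can go further. */
      close(fd);
      return 0;
  }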
For clients to really have access to large files, three pieces are
required: local FS support, an appropriate network protocol, and
server support for 64-bit FSes. For FTP access, I believe _literally_
infinitely large files are possible, but I'm not sure (SHMO). For NFS
access, NFS V2 supports only 2GB files. NFS V3, just becoming
available now, supports full 64-bit files (the protocol is documented
in RFC 1813, mentioned above). With the notable exception of
Unitree (which does not use, depend on, or appear as, a local FS on
the server), server support for 64-bit files is provided only when the
server's own local FSes are 64-bit.
Even among the systems that _do_ support large files, not all are
programmer- or user-transparent. UniCOS is, OSF/1 is, ConvexOS is not
(there are two system calls, lseek and lseek64, with 32-bit and 64-bit
file offsets, respectively, though the Fortran interface is
transparent).
This brings up related issues. A complete large-files implementation
needs not only the system calls, but also the stdio library and the
runtime libraries for the languages (Fortran, Cobol, ...). Further,
system utilities (sed, dd, etcetera) need to be capable of dealing
with large files.
(It has been pointed out that the GNU C compiler runs on most of these
machines, so it is possible to use "long long" as a 64-bit int on
them, but what matters for file systems is the system compiler.)
Here's the start of a table on these. Really such a simple table can't
do the problem justice, but it'll give you an idea. Keep in mind that
many of these systems support many file system types; I've listed only
the most interesting so far from this point of view. I'd like to flesh
it out more completely, though.
1 GB = 2^30, 1 TB = 2^40, 1 PB = 2^50, 1 EB = 2^60
NYR = Not Yet Released
OS/hardware 64-bit C max max NFS info
datatype par- file V3 updated
tition size sup
size (bytes)
UniCOS (Cray vector) int, long ? 8 EB? ? 8/94
ConvexOS long long 1 TB 1 TB N 9/94
Alpha AXP OSF/1 V3.0
AFS long 128 GB 16 TB 8/94 9/94
Paragon OSF/1 ? 8 EB* 8 EB N 2/95
UTS (Amdahl) ? ? 8 EB? ? 8/94
HP/UX 9 (HP 9xxx) ? 4 GB ? ? 8/94
Silicon Graphics
IRIX 5.2 EFS long long 8 GB 2 GB N 9/94
IRIX 6.0 EFS long 8 GB 2 GB N 9/94 (NYR)
IRIX 5.3 XFS ?long long? ? ?TB? Y 9/94 (NYR)
AIX (IBM RS/6000)
4.1 JFS long long 64 GB 2 GB N 8/94
Solaris 2.x (Sun Sparc) long long 1 TB 2 GB (soon?) 9/94
BSD 4.4 long long ? 8 EB? ? 8/94
Linux long long 1 TB 2 GB ? 9/94
DG/UX 5.4 long long 2 TB 2 GB ? 9/94
Alliant Concentrix long long ?>2 GB ?>2 GB N 9/94 (dead)
* The Paragon PFS (Parallel File System), as I understand it, parallelizes
access to the files; each partition striped across is limited to 2GB, so
really the max partition size is 2GB * # of disks that can be attached.
A slightly more detailed description of certain implementations is
available with the WWW version.
In addition, the HPSS (see above) supports large files, as does
Unitree (though the Unitree interface to them is limited).
Subject: [7.9] Non-Unix Large File Systems
From: File Systems
(info about non-Unix large FSes also welcome; SHMO)
OpenVMS (any version) supports 2TB files (32-bit unsigned block
number, 9-bit offset) through its RMS interface (still limited to 2GB
through the C run-time library), but file systems are limited to ~7GB
(as of OpenVMS AXP 1.5 and OpenVMS VAX 6.0 the max volume size has been
bumped to 1 TB). (from a friend, rdv, 94/8/26, and Rod Widdowson,
Filesystems group, OpenVMS engineering, Scotland).
Subject: [8] (Device) Interfaces
From: (Device) Interfaces
There is a new web site with lots of info at
http://www.cit.ac.nz/smac/cbt/hwsys/bussys/default.htm (rdv, 96/2/21)
Looks like it's class notes, so no idea how long it will stay up.
Don't forget to see http://www.cmpcmm.com/cc/.
Subject: [8.1] SCSI {Full}
From: (Device) Interfaces
SCSI is the Small Computer System Interface. It is standardized by
ANSI X3T9.2. It is mostly aimed at storage devices, with command sets
defined for disks, tapes, and autochangers, but also includes
communications devices, printers, and scanners.
It's daisy-chained, with a maximum of eight devices (including the
host computer) on a single narrow bus (there are non-standard schemes
for 16 devices on a wide bus). Any device can be an initiator, so it's
possible to use the bus for sharing devices between hosts, provided
your software can manage it.
See also the newsgroup comp.periphs.scsi, especially for "How do I
hook up a Brand X diskdrive to my Atavachron 9000 PDA?" type
questions.
There is also an FTP site for some working documents for the SCSI-3
committees and other X3T10 documents. See ftp://ftp.symbios.com or
ftp.hmpd.com.
You'll find good info at http://www.symbios.com/x3t10/ and at
http://www.scsita.com.
Subject: [8.1.1] Single ended vs differential
From: (Device) Interfaces
This distinction is at the electrical signalling level. However,
single-ended is limited to total bus lengths of 6.0 meters, while
differential can go up to 25 meters (SCSI-II). Differential is
generally more robust to noise and cross-talk, but the bus drivers are
more expensive. In theory no difference in transfer speed or
capabilities, but in practice the added noise margin could mean higher
_reliable_ transfer rates on your system, especially if your bus is
long.
Most disk drives and most low-end products are available only with a
single-ended interface. A few devices are available with either as a
purchase option, and a few are switchable by the user.
The cables and connectors are the same for both, though the pinouts
are (naturally) somewhat different.
Plugging a single-ended device into a running differential bus or
vice-versa may result in damage to one or more devices. Most newer
devices have fuses or protection circuits utilizing the DIFFSENSE
signal to prevent device damage.
There are now recommended icons used to distinguish between the two:
single-ended differential
/\ //\
/ \ // \
< -- << --
\ / \\ /
\/ \\/
Converters do exist that will allow you to hook up single-ended
devices to a differential bus and vice-versa. People who have used
them say they work great, but in theory they shouldn't work :-). As I
understand it, changing the signalling introduces delays in some of
the control signals that means that some devices could miss certain
signal transitions. The best advice is to borrow one and try it, and
see if it works in your system. One company's name is Paralan,
(619)560-7266.
Subject: [8.1.2] Asynchronous vs Synchronous Transfers
From: (Device) Interfaces
Asynchronous transfers mean that every single byte must be
acknowledged before the next can be transferred. Synchronous means that
the device sending data can drop a series of transfers onto the bus,
toggling REQ or ACK (as appropriate), and then sit back and wait for
the corresponding pulses to return from the other device.
Async transfers, involving much more waiting, are correspondingly
slower. 2-4 MB/sec are good values for async transfers.
Sync transfer speeds are established during a negotiation between the
initiator and target, but devices are not required to use the full
speed they negotiate for. This speed represents the maximum burst rate
your device will use. Common values are 5 and 10 MB/sec.
In practice, virtually every modern device supports synchronous
transfers, but some implementations are better than others.
Subject: [8.1.3] SCSI-I vs SCSI-II vs SCSI-III
From: (Device) Interfaces
SCSI (now commonly known as SCSI-I) was the original 1986 standard,
X3.131-1986. It specified the electrical level and some of the
mid-layer issues involving messages and packet structure, but (I
believe, my memory's bad) didn't formalize the Common Command Set
(CCS); that was done independently. It supported a maximum burst rate
of 5 MB/sec. on an 8-bit bus.
ADDITIONAL INFORMATION
Consult the SCSI standards documents, and the manuals for the device
you are working with, for more information. The "SCSI 1" specification
document is called SCSI Specification, ANSI X3T9.2/86-109. Also of
interest is the Common Command Set specification document, SCSI CCS
Specification, ANSI X3T9.2/85-3.
SCSI-II received final approval in early 1994, but has been a de facto
standard for several years. The CCS was standardized for a variety of
different types of peripherals. The max allowable transfer rate was
raised to 10 MT/s (see below). A 16-bit bus (Wide SCSI) and 32-bit bus
(double-wide SCSI) are specified (see below).
SCSI-III is the latest effort, and involves more cleanly separating
the functionality into layers; the command layer is defined
independently from the physical layer. In addition to the traditional
parallel cable, there are efforts going on to define physical layers
for Fibre Channel and a more generic Serial SCSI. Thus, there will be
no SCSI-IV; only the individual pieces will be updated as necessary.
Subject: [8.1.4] Fast-Wide SCSI
From: (Device) Interfaces
The max allowable transfer rate was raised to 10 MT/s (mega-transfers
per second) in SCSI-2, referred to as Fast SCSI. Note that this is NOT
required; devices running at ANY speed below that may claim to be
SCSI-II compliant! Fast implies SCSI-II, not the other way around!
Fast Narrow is thus 10 MB/sec. Both the initiator (computer) and
target (peripheral) must support fast transfer for it to be of any
use, but intermixing fast and slow devices on a bus presents no
operational problems (only performance ones).
A 16-bit bus (Wide SCSI) and 32-bit bus (double-wide SCSI) are
specified in SCSI-2. The wide busses require the use of a second cable
in SCSI-2. The first cable is 50 pins, known as the A cable; the 2nd
is 68 pins, known as the B cable. I know of no one actually using
32-bit SCSI, but it would also run on an A/B cable pair. Slow (or
Normal) Wide is thus 5 MT/s * 2 Bytes/T, 10 MB/sec. Fast Wide is 20
MB/sec. Fast Double Wide would be 40 MB/sec.
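The arithmetic behind all these numbers is simply transfers per
second times bus width in bytes; a trivial illustration in C:

  #include <stdio.h>

  /* Burst rate in MB/sec = (mega-transfers/sec) * (bus width, bytes) */
  static double burst_mb(double mtrans_per_sec, int width_bytes)
  {
      return mtrans_per_sec * width_bytes;
  }

  int main(void)
  {
      printf("Slow Narrow:      %2.0f MB/s\n", burst_mb( 5, 1));
      printf("Fast Narrow:      %2.0f MB/s\n", burst_mb(10, 1));
      printf("Slow Wide:        %2.0f MB/s\n", burst_mb( 5, 2));
      printf("Fast Wide:        %2.0f MB/s\n", burst_mb(10, 2));
      printf("Fast Double Wide: %2.0f MB/s\n", burst_mb(10, 4));
      return 0;
  }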
In the SCSI-3 physical layer spec (SCSI-PH), a single 68-pin cable,
known as the P cable, is allowable for 8 or 16-bit busses. This is the
option most people who have implemented Wide SCSI have chosen for the
cabling, even though their upper layer is generally SCSI-2.
There is a small movement (heard here on the net occasionally) to
promote an Ultra-SCSI high-speed bus, with a burst rate of something
like 20 MT/sec on very short cables. At present it is unclear what
will happen to this effort. There is also talk, in conjunction with a
change to low-voltage differential signalling, of going to 40 MT/sec.
Subject: [8.1.5] Shared Busses / Performance {Brief}
From: (Device) Interfaces
Also known as, "It's only a 500KB/sec. tape drive, why do I care if
the burst rate is only 2 MB/sec.?" or gets good marks for "plays well
with others".
Most of this is relevant to all shared busses, not just SCSI.
burst v. sustained performance, disconnect, command overhead, etc.
Subject: [8.1.6] Cabling/Hot Plugging {Brief}
From: (Device) Interfaces
Nominally not supported.
Subject: [8.1.7] Third Party Transfers/Separation of Control & Data Paths {Brief}
From: (Device) Interfaces
SCSI-2 has commands that support third-party copying of data; one
initiator tells device A to copy to device B. I don't know of any
devices actually using this.
Separation of control & data paths is a popular topic these days; can
somebody comment on whether or not SCSI-3 supports this? I don't think
so. (SHMO)
Subject: [8.2] IDE {Brief}
From: (Device) Interfaces
PC use
Does not support overlapped I/O.
Subject: [8.3] IPI {None}
From: (Device) Interfaces
Subject: [8.4] HIPPI {Brief}
From: (Device) Interfaces
32-bit transfers at 25 MT/sec., 100 MB/sec. High Performance Parallel
Interface is a unidirectional channel, i.e. you have to have an OUT
cable and an IN cable for bidirectional transfers (you could have
just one, if it's a read-only device like a scanner or write-only like
a frame buffer). HiPPI is not a shared bus, but its frames can be
switched through a crossbar switch (Network Systems is the premiere
vendor).
HiPPI is used for supercomputer-to-supercomputer networking (TCP/IP,
no less), for RAID arrays (from Maximum Strategy, IBM and others),
tape drives (Sony ID-1 drive), frame buffers and increasingly
workstations (SGI and IBM support HiPPI, and 3rd-party Sbus cards
exist for Sun).
Due partly to the high overhead of HiPPI connections, many devices
have elected to separate the control path from the data path. A common
control path in that case is ethernet.
Good resources from the HiPPI Networking Forum on the web at
http://www.esscom.com/hnf/.
Subject: [8.4.1] HIPPI-6400 {Brief}
From: (Device) Interfaces
An effort aimed at reaching 6400 Mbps (800 Mbytes/sec.) around the end
of 1996.
From rev 0.15 of the HIPPI-6400-PH specification, dated March 4, 1996,
ftp'ed from ftp.network.com:X3T11/hippi/hippi-6400-ph_0.15.ps.
Looks like the copper interface will be a cable with 44 micro-coax
conductors, 22 in each direction. That's 16 data, 4 control, clock,
and frame. A micro-packet is 32 data bytes and 64 bits of control
information. I guess this means they're planning on 400 Mbps on each
data line. The fiber variant uses 12 multimode fibers (in each
direction, I presume, though it doesn't seem to say that): 8 data + 2
control + frame + clock, so presumably 800 Mbps on each fiber. Cable
lengths in both cases TBD.
Subject: [8.5] Ultranet {Brief}
From: (Device) Interfaces
Fiber to the host, a hub with a backplane running at a total rate of
~1Gbps.
Subject: [8.6] Ethernet {Brief}
From: (Device) Interfaces
Generally related to normal inter-host networking, but also used as
a control path for some HiPPI devices. Ampex also uses NetSCSI over
ethernet to control their autochangers. Also, obviously, used for
connecting many servers to their clients. Standard today is 10 Mbps,
100 Mbps (fast ethernet) is becoming more common.
Subject: [8.7] FDDI {None}
From: (Device) Interfaces
Subject: [8.8] Fibre Channel Standard (FCS)
From: (Device) Interfaces
Rich Taborek of Amdahl has created an excellent web page on Fibre
Channel at http://www.amdahl.com/ext/CARP/FCA/FCA.html.
ftp.network.com [Has draft Fibre Channel documents]
playground.sun.com [Has FCSI Fibre Channel Profiles]
(rdv, 95/5/18 from Louis Grantham <Louis.Grantham@dalsemi.com>)
Fibre Channel runs over coax or optical fibre (single or multimode),
and even twisted pair. Fibre Channel comes in two basic forms --
Arbitrated Loop and switched fabric, which aren't (yet)
interoperable. The host interfaces are rapidly becoming cheaper, but
the switches are still expensive.
Fibre Channel standards define several functional levels, from the
physical interface up to the mapping to upper level functionality,
e.g. how to do SCSI commands over FC. FC provides several "classes"
of service, including dedicated circuit and acknowledged and
unacknowledged datagrams. Can also be used for IP. (rdv, 96/10/28)
Subject: [8.9] ESCONN/SBCON {Brief}
From: (Device) Interfaces
Enterprise Systems CONNect. IBM's new mainframe attach -- fiber, I
believe. The standardized version of this is known as SBCON, and Rich
Taborek has once again created an excellent web page at
http://www.amdahl.com/ext/CARP/SBCON/SBCON.html.
Subject: [8.10] IEEE P1394 (Serial Bus)
From: (Device) Interfaces
Apple's new standard for connecting devices via a high-speed serial
bus. Good info at http://www.skipstone.com. Also some info FTPable at
ftp.apple.com (I think that's where I got those papers.) (rdv,
95/5/15)
After having been somewhat dormant for a while, standards activity on
new versions of 1394 is heating up again. Faster versions are in the
works, as is a protocol for doing disks across it. (rdv, 96/10/28)
Subject: [8.11] Serial Storage Architecture (SSA)
From: (Device) Interfaces
IBM's new offering in the serial device interface sweepstakes.
Some docs and tentative working standards available on the SCSI ftp
site: ftp.symbios.com:pub/standards/io/ssa The SSA Industry
Association has a web server at http://www.ssaia.org. Disk drives are
coming from Conner and Micropolis, and Pathlight and Adaptec are
expected to do host adapters.
Subject: [8.12] S2I: IEEE P1285 Scalable Storage Interface
From: (Device) Interfaces
Chaired by Martin Freeman, Philips Research, this is an effort to
standardize attaching disk drives directly to a system bus, making the
disk's buffers readable as regular memory to the CPU. Sort of the
opposite of network-attached storage, this couples the storage device
design more closely to the hardware and OS of the host system. See
http://sunrise.scu.edu/P1285Home.html for more info. (rdv, 1995/12/22)
Subject: [8.13] Multibus, Unibus, Mainframe Channels, and other history {None}
From: (Device) Interfaces
Subject: [9] Other
From: Other
Subject: [9.1] Video vs Datagrade tapes {brief, 5/94}
From: Other
cost vs reliability
Are datagrade really more reliable?
Warranty of drive
Cleaning cycle of drive
Headlife of drive
Subject: [9.2] Compression
From: Other
See the comp.compression FAQ, and don't believe everything a vendor
tells you. 2x compression is the standard going rate for lossless
compression of arbitrary data, though some vendors claim 2.5 or 3x.
Your mileage will vary with your data type.
Compressing tape drives are common, but for disks and other block
devices I don't know of anything being done. The unpredictability of
the compression ratio generally makes it inappropriate for devices
that need fixed capacities and addresses.
Online compression of files can be accomplished by hand using
utilities such as gzip and Unix compress. Some systems support
software compression of files in the file system software, and will
transparently compress and decompress files as needed. Stacker for PCs
is one example; for Unix-like systems this seems to be common research
for object-oriented file systems (including the GNU Hurd), but I don't
know of any production versions offhand (SHMO).
Compression may make your data more vulnerable to errors. A single
error early in a compressed stream of data can render the entire data
stream unreadable.
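You can demonstrate this for yourself with the freely available zlib
compression library; the sketch below (an illustration, nobody's
product code) compresses a buffer, flips a single bit near the front
of the compressed stream, and watches the whole thing fail to
decompress. Compile with something like "cc corrupt.c -lz":

  #include <stdio.h>
  #include <string.h>
  #include <zlib.h>

  int main(void)
  {
      unsigned char src[4096], comp[4200], out[4096];
      uLongf clen = sizeof comp, olen = sizeof out;
      int rc;

      memset(src, 'x', sizeof src);
      if (compress(comp, &clen, src, sizeof src) != Z_OK)
          return 1;

      comp[10] ^= 0x01;  /* one flipped bit, early in the stream */

      /* Everything from the bad bit onward is suspect; zlib normally
         returns Z_DATA_ERROR and recovers little or nothing. */
      rc = uncompress(out, &olen, comp, clen);
      printf("uncompress returned %d, recovered %lu of %lu bytes\n",
             rc, (unsigned long)olen, (unsigned long)sizeof src);
      return 0;
  }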
Subject: [10] Benchmarking
From: Benchmarking
See the comp.benchmarks FAQ, and don't believe everything a vendor
tells you.
There's a good paper on a new I/O benchmarking technique that also
covers the pitfalls of I/O benchmarking in the Nov. '94 ACM
Transactions on Computer Systems -- "A New Approach to I/O Performance
Evaluation -- Self-Scaling I/O Benchmarks, Predicted I/O Performance",
Peter Chen and David Patterson.
Bonnie, IOZONE, IOBENCH, nhfsstone, one of the SPECs (SFS), are all
useful for measuring I/O performance. There is also a program called
BENCHMARK available from infotech@digex.com -- apparently a
standardized set of scripts to test remote access to mass storage
systems.
In particular, note that based on a discussion here recently (8/96),
it appears that some magazines (who ought to know better) are using
HDT BenchTest as a disk drive performance measure, with the I/O sizes
set so small that the disk drive cache is covering them all, resulting
in anomalously high data rates (especially write rates).
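Avoiding that pitfall takes nothing fancy: write far more data than
any cache can hold, force it to the medium, and use wall-clock time.
A bare-bones sketch in C of a sequential-write test (hypothetical
file name; pick sizes that comfortably exceed your drive's cache and,
ideally, main memory):

  #include <stdio.h>
  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/time.h>

  #define BLOCK   (64 * 1024)
  #define NBLOCKS 1024           /* 64 MB total */

  int main(void)
  {
      static char buf[BLOCK];
      struct timeval t0, t1;
      double secs;
      int fd, i;

      memset(buf, 0xA5, sizeof buf);
      fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (fd < 0) { perror("open"); return 1; }

      gettimeofday(&t0, NULL);
      for (i = 0; i < NBLOCKS; i++)
          if (write(fd, buf, sizeof buf) != sizeof buf) {
              perror("write");
              return 1;
          }
      fsync(fd);  /* don't stop the clock until it's really on disk */
      gettimeofday(&t1, NULL);
      close(fd);

      secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
      printf("%.2f MB/sec\n", (double)BLOCK * NBLOCKS / 1e6 / secs);
      return 0;
  }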
http://home.hkstar.com/~tamws/comp/bench/hdbench.htm is the start of a
reasonable-looking benchmark for PC hard drives (posted by
tamws@hkstar.com, 9/96)
==== SPEC SFS ====
SPEC's System-level File Server (SFS) workload measures NFS server
performance. It uses one server and two or more "load generator"
clients.
SPEC-SFS is not free; it costs US$1,200 from the SPEC corporation.
There's a FAQ about SPEC posted sometimes in comp.benchmarks.
Subject: [11] Mass Storage Conferences
From: Mass Storage Conferences
There are two main academic conferences devoted specifically to mass
storage (in addition to, of course, the supercomputer and OS
conferences, and interesting stuff in databases, optical
conferences, Usenix, SOSP...).
NASA Goddard Space Flight Center and the IEEE run two conferences, in
an 18-month or so alternating pattern.
You'll find my notes on the latest Goddard conference at
http://www.isi.edu/~rdv/conferences/goddard96.html.
The contact for the NASA Mass Storage Conference (Sept. 17-19, 1996):
Jorge Scientific Corporation
7500 Greenway Center Drive
Suite 1130
Greenbelt, MD USA 20770
tel(301)220-1701
fax(301)220-1704
or if that fails email bkobler@gsfcmail.nasa.gov or
ben.kobler@gsfc.nasa.gov. There is some info available on the web at
http://esdis.gsfc.nasa.gov/msst/msst.html.
Also, the latest IEEE was in September '95:
* The 14th IEEE Mass Storage Symposium was September 11-14, 1995 at
Monterey, CA. More info from Bernie O'Lear (olear@ncar.ucar.edu) or
Sam Coleman (scoleman@llnl.gov).
Also of interest, there are the conferences on Very Large Database
Systems. I have a reference somewhere...
Interesting material shows up in the SPIE conferences.
Subject: [11.0.1] THIC Tape Head Interface Committee {Brief, New}
From: Mass Storage Conferences
I would like to bring to your attention the THIC Home Page at the URL
http://www.thic.org/thic/ and its anonymous ftp archives at the URL
ftp://ftp.uu.net/vendor/THIC/
THIC started out in the early 70's as the Tape Head Interface Committee
under the auspices of the DoD, but has since grown and expanded to embrace
most data recording technologies. THIC has been meeting four times a year,
alternating between the east and west coasts. The last meeting was in
Seattle WA on Jan 21 and 22, 1997, and the next will be on April 22 and 23
at the DoubleTree in Tysons Corner VA. The papers range from marketing,
new product announcement and discussion, to the problems of the various
recording technologies. Since October 1995, I have been trying to collect
as many of the papers as I could from each of the meetings and have been
placing them in Adobe PDF on the THIC archives at ftp.uu.net. I also
maintain a no-frills home page where the agenda is displayed, with links to
papers which are available in the archives. (P.C. Hariharan, 97/2)
Subject: [12] MTBF (Mean Time Between Flareups, er, Failures)
From: MTBF (Mean Time Between Flareups, er, Failures)
There is a short FAQ-like document available from IBM at
http://www.storage.ibm.com/storage/oem/tech/mtbf.htm. No math for the
statistically inclined, but explains in clear prose what IBM at least
means when they say MTBF.
I will also note that, for a complex but reparable system such as an
autochanger, each subsystem may have a separate MTBF and a different
lifetime, which may be combined to give one figure for the unit as a
whole.
Here is a reasonably understandable, but somewhat long, description
of MTBF, which Kevin Daly (president of Odetics, kdaly@odetics.com)
wrote in 10/95 for this FAQ. After some waffling, I've included the
whole thing, despite its length.
===============================================================
M T B F
In order to understand MTBF (Mean Time Between Failures) it is best to
start with something else -- something for which it is easier to
develop an intuitive feel. This other concept is failure rate which
is, not surprisingly, the average (mean) rate at which things fail. A
"thing" could be a component, an assembly, or a whole system. Some
things -- rocks, for example -- are accepted to have very low failure
rates while others -- British sports cars, for example -- are (or
should be) expected to have relatively high failure rates.
It is generally accepted among reliability specialists (and you,
therefore, must not question it) that a thing's failure rate isn't
constant, but generally goes through three phases over a thing's
lifetime. In the first phase the failure rate is relatively high, but
decreases over time -- this is called the "infant mortality" phase
(sensitive guys these reliability specialists). In the second phase
the failure rate is low and essentially constant -- this is
(imaginatively) called the "constant failure rate" phase. In the
third phase the failure rate begins increasing again, often quite
rapidly -- this is called the "wearout" phase. The reliability
specialists noticed that when plotted as a function of time the
failure rate resembled a familiar bathroom appliance -- but they
called it a "bathtub" curve anyway. The units of failure rate are
failures per unit of "thing-time"; e.g. failures per machine-hour or
failures per system-year.
What, you may ask, does all this have to do with MTBF? MTBF is the
inverse of the failure rate in the constant failure rate phase.
Nothing more and nothing less. The units of MTBF are (or, should be)
units of "thing-time" pre failure; e.g. machine-hours per failure or
system-years per failure but the "thing" part and the "per failure"
part are almost always omitted to enhance the mystique and confusion
and to make MTBF appear to have the units of "time" which it doesn't.
We will bow to the convention of speaking of MTBF in hours or years --
but we all know what we really mean.
What does MTBF have to do with lifetime? Nothing at all! It is not
at all unusual for things to have MTBF's which significantly exceed
their lifetime as defined by wearout -- in fact, you know many such
things. A "thirty-something" American (well within his constant
failure rate phase) has a failure (death) rate of about 1.1 deaths per
1000 person-years and, therefore, has an MTBF of 900 years (of course
it's really 900 person-years per death). Even the best ones, however,
wear out long before that.
This example points out one other important characteristic of MTBF --
it is an ensemble characteristic which applies to populations (i.e.
"lots") of things; not a sample characteristic which applies to one
specific thing. In the good old days when failure rates were
relatively high (and, therefore, MTBF relatively low) this
characteristic of MTBF was a curiosity which created lively (?) debate
at conventions of reliability specialists (them) but otherwise didn't
unduly bother right-thinking people (us). Things, however, have
changed. For many systems of interest today the required failure
rates are so low that the MTBF substantially exceeds the lifetime
(obviously nature had this right a long time ago). In these cases
MTBF's are not only "not necessarily" sample characteristics, but are
"necessarily not" sample characteristics. In the terms of the
reliability cognoscenti, failure processes are not ergodic (i.e. you
can't blithely trade population statistics for time statistics). The
key implication of this essential characteristic of MTBF is that it
can only be determined from populations and it should only be applied
to populations.
MTBF is, therefore, an excellent characteristic for determining how
many spare hard drives are needed to support 1000 PC's, but a poor
characteristic for guiding you on when you should change your hard
drive to avoid a crash.
MTBF's are best determined from large populations. How large? From
every point of view (theoretical, practical, statistical) but cost,
the answer is "the larger, the better". There are, however, well
established techniques for planning and conducting test programs to
develop specified levels of confidence in a thing's MTBF.
Establishing an MTBF at the 80% confidence level, for example, is
clearly better, but much more difficult and expensive, than doing it
at a 60% confidence level. As an example, a test designed to
demonstrate a thing's MTBF at the 80% confidence level, requires a
total thing-time of 160% of the MTBF if it can be conducted with no
failures. You don't want to know how much thing-time is required to
achieve reasonable confidence levels if any failures occur during the
test.
What, by the way, is "thing-time"? An important subtlety is that
"thing-time" isn't "clock time" (unless, of course, your thing is a
clock). The question of how to compute "thing-time" is a critical one
in reliability engineering. For some things (e.g. living thing) time
always counts but for others the passage of "thing-time" may be highly
dependent upon the state of the thing. Various ad hoc time
corrections (such as "power on hours" (POH)) have been used, primarily
in the electronics area. There is significant evidence that, in the
mechanical area "thing-time" is much more related to activity rate
than it is to clock time. Measures such as "Mean Cycles Between
Failures (MCBF)" are becoming accepted as more accurate ways to assess
the "duty cycle effect". Well-founded, if heuristic, techniques have
been developed for combining MCBF and MTBF effects for systems in
which the average activity rate is known.
MTBF need not, then, be "Mysterious Time Between Failures" or
"Misleading Time Between Failures", but an important system
characteristic which can help to quantify the suitability of a system
for a potential application. While rising demands on system integrity
may make this characteristic seem "unnatural", remember you live in a
country of 250 million 9-million-hour MTBF people!
===================================================================
Kevin C. Daly
President
ATL Products
kdaly@odetics.com
(714) 774-6900
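Two of the quantities in Kevin's description are easy to compute for
yourself. The sketch below, with made-up numbers, does the spares
arithmetic (failures/year = N * H / MTBF in the constant-failure-rate
phase) and checks the zero-failure test-time figure (total thing-time
T = -ln(1 - C) * MTBF at confidence level C). Compile with -lm.

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double mtbf_hours = 300000.0; /* a vendor's claimed drive MTBF */
      double n_units    = 1000.0;   /* drives in the population */
      double hours_yr   = 8760.0;   /* powered on around the clock */
      double conf[]     = { 0.60, 0.80, 0.90, 0.95 };
      int i;

      /* Spares: expected failures per year across the population. */
      printf("expect about %.0f drive failures per year\n",
             n_units * hours_yr / mtbf_hours);

      /* Zero-failure demonstration test: C = 0.80 gives ~161%, the
         "160%" figure quoted above. */
      for (i = 0; i < 4; i++)
          printf("%.0f%% confidence: test time = %3.0f%% of one MTBF\n",
                 conf[i] * 100.0, -log(1.0 - conf[i]) * 100.0);
      return 0;
  }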
Subject: [13] Mass Storage Reports
From: Mass Storage Reports
There are a number of consultants who also write regularly updated
in-depth reports (and sometimes post here) about various aspects of
the mass storage market; if you're going to get into this business or
are planning on spending many thousands or millions of dollars on
equipment, talking to one of them might be a good idea.
Sanjay Ranade (infotech@digex.com) is one of the ones who both writes
and posts here (he also has a couple of reasonably-priced books about
mass storage). Infotech's reports include HSM, network backup,
magtape and libraries.
Others include Disk/Trend (Mountain View, CA, 405-961-6209)
http://www.disktrend.com (good info there) and Freeman Reports
(805-963-3853).
Strategic Research Corporation has numerous white papers and good
links available at http://www.sresearch.com, including networked
storage. Some of them seem biased in particular directions, so caveat
emptor.
Subject: [14] Network-Attached Peripherals {Brief}
From: Network-Attached Peripherals {Brief}
Coming soon. My own research is in this area; if you're lucky you
might find some pointers by going through my home page
http://alumni.caltech.edu/~rdv/. Contributions welcome.
Look for "A Brief Survey of Current Work on Network-Attached
Peripherals" in the January '96 ACM Operating Systems Review, by yours
truly. An expanded, updated version is available on the web at
http://www.isi.edu/~rdv/netstation/nap-research/. (rdv, 96/1/22)
http://www.cs.cmu.edu/Web/Groups/PDL/ is Garth Gibson's Parallel Data
Lab, where they're doing excellent work on network-attached storage
devices.
At Lawrence Livermore, they're doing a network-attached RAID array to
integrate into HPSS; see
http://www.llnl.gov/liv_comp/siof/siof-nap.html.
The ViewStation work at MIT,
http://tns-www.lcs.mit.edu/tns-www-home.html is concentrating on
ATM-attached peripherals, using ATM as a system-area network.
The Netstation project http://www.isi.edu/netstation/ (which I work on
at ISI) is focusing on IP-connectible peripherals, using a gigabit
network as the system backplane.
Subject: [15] Other References
From: Other References
Subject: [15.1] Print
From: Other References
Computer Technology Review magazine, 310/208-1335, free to some.
Electronic News, weekly, 800/722-2346.
MacWeek, June 7, 1993, Page 36+
IEEE Computer had a full issue in March 94 on I/O subsystems
There are also two books by Sanjay Ranade (infotech@digex.com), who
posts here occasionally. One is _Mass Storage Technologies_
(1991ish?), the other, newer one is _Mass Storage Systems_. I've read
the first one, it's a little short on detail but a good overview.
Subject: [15.2] Web
From: Other References
http://www.cmpcmm.com/cc/ standards and tons of info.
http://www.nml.org performance reports, media surveys, etc. Goes into a
lot of detail on topics such as archival stability.
http://www.yahoo.com/Business/Corporations/Computers/Peripherals/Storage/
lists some resellers and manufacturers of storage.
http://theref.c3d.rl.af.mil has good information about PC hardware,
including old interfaces, floppies, controllers, etc. It has a LONG
list of specs for hard drives.
http://www.cs.yorku.ca/People/frank/Welcome.html also has good info on
hard drives and CD-ROM drives.
http://www.sresearch.com/search/105008.htm lists storage products and
market projections.
Subject: [15.3] Newsgroups
From: Other References
You're in the primary one (comp.arch.storage). You'll also find info
in the groups on SCSI, PC hardware, and specific operating systems.
I'll try to add pointers to their FAQs soon.
The FAQ for comp.sys.ibm.pc.hardware.storage can be found at
http://thef-nym.sci.kun.nl/~pieterh/storage.html.
Subject: [15.4] Research Papers
From: Other References
I'm collecting reviews and a list of papers now, I expect to add it in
a few weeks. Contributions/suggestions welcome.
Subject: [16] ORIGINAL CALL FOR VOTES
From: ORIGINAL CALL FOR VOTES
NAME:
comp.arch.storage
STATUS:
unmoderated
DESCRIPTION:
storage system issues, both software and hardware
CHARTER:
To facilitate and encourage communication among people interested in computer
storage systems. The scope of the discussions would include issues relevant
to all types of computer storage systems, both hardware and software. The
general emphasis here is on open storage systems as opposed to platform
specific products or proprietary hardware from a particular vendor. Such
vendor specific discussions might belong in comp.sys.xxx or comp.periphs.
Many of these questions are at the research, architectural, and design levels
today, but as more general storage system products enter the market,
discussions may expand into "how to use" type questions.
RATIONALE:
As processors become faster and faster, a major bottleneck in computing
becomes access to storage services: the hardware - disk, tape, optical,
solid-state disk, robots, etc., and the software - uniform and convenient
access to storage hardware. A far too true comment is that "A supercomputer
is a machine that converts a compute-bound problem into an I/O-bound
problem." As supercomputer performance reaches desktops, we all experience
the problems of:
o hot processor chips strapped onto anemic I/O
architectures
o incompatible storage systems that require expensive
systems integration gurus to integrate and
maintain
o databases that are intimately bound into the quirks of an
operating system for performance
o applications that are unable to obtain guarantees on when
their data and/or metadata is on stable storage
o cheap tape libraries and robots that are under-utilized
because software for migration and caching to
disk is not readily available
o nightmares in writing portable applications that attempt
to access tape volumes
This group will be a forum for discussions on storage topics including the
following:
>1. commercial products - OSF Distributed File System (DFS)
based on Andrew, Epoch Infinite Storage Manager and
Renaissance, Auspex NS5000 NFS server, Legato
PrestoServer, AT&T Veritas, OSF Logical Volume Manager,
DISCOS UniTree, etc.
>2. storage strategies from major vendors - IBM System
Managed Storage, HP Distributed Information Storage
Architecture and StoragePlus, DEC Digital Storage
Architecture (DSA), Distributed Heterogeneous Storage
Management (DHSM), Hierarchical Storage Controllers, and
Mass Storage Control Protocol (MSCP)
>3. IEEE 1244 Storage Systems Standards Working Group
>4. ANSI X3B11.1 and Rock Ridge WORM file system standards
groups
>5. emerging standard high-speed (100 MB/sec and up)
interconnects to storage systems: HIPPI, Fiber Channel
Standard, etc.
>6. POSIX supercomputing and batch committees' work on
storage volumes and tape mounts
>7. magnetic tape semantics ("Unix tape support is an
oxymoron.")
>8. physical volume management - volume naming, mount
semantics, enterprise-wide tracking of cartridges, etc.
>9. models for tape robots and optical jukeboxes - SCSI-2,
etc.
>10. designs for direct network-attached storage (storage as
black box)
>11. backup and archiving strategies
>12. raw storage services (i.e., raw byte strings) vs.
management of
structured data types (e.g. directories, database
records,...)
>13. storage services for efficient database support
>14. storage server interfaces, e.g., OSF/1 Logical Volume
Manager
>15. object server and browser technology, e.g. Berkeley's
Sequoia 2000
>16. separation of control and data paths for high performance
by removing the control processor from the data path;
this eliminates the requirements for expensive I/O
capable (i.e., mainframe) control processors
>17. operating system-independent file system design
>18. SCSI-3 proposal for a flat file system built into the
disk drive
>19. client applications which bypass/ignore file systems:
virtual memory, databases, mail, hypertext, etc.
>20. layered access to storage services - How low level do we
want device control? How to support sophisticated, high
performance applications that need to bypass the file
abstraction?
>21. migration and caching of storage objects in a distributed
hierarchy of media types
>22. management of replicated storage objects
(differences/similarities to migration?)
>23. optimization of placement of storage objects vs. location
transparency and independence
>24. granularity of replication - file system, file, segment,
record, etc.,
>25. storage systems management - What information does an
administrator need to manage a large, distributed storage
system?
>26. security issues - Who do you trust when your storage is
directly networked?
>27. RAID array architectures, including RADD (Redundant
Arrays of Distributed Disks) and Berkeley RAID-II HIPPI
systems
>28. architectures and problems for tape arrays - striped tape
systems
>29. stable storage algorithm of Lampson and Sturgis for
critical metadata
>30. How can cheap MIPS and RAM help storage? - HP DataMesh,
write-only disk caches, non-volatile caches, etc.
>31. support for multi-media or integrated digital continuous
media (audio, video, other realtime data streams)
This group will serve as a forum for the discussion of issues which do not
easily fit into the more tightly focused discussions in various existing
newsgroups. The issues are much broader than Unix (comp.unix.*, comp.os.*),
as they transcend operating systems in general. Distributed computer systems
of the future will offer standard network storage services; what operating
system(s) they use (if any) will be irrelevant to their clients. The
peripheral groups (comp.periphs, comp.periphs.scsi) are too hardware oriented
for these topics. Several of these topics involve active standards groups
but several storage system issues are research topics in distributed systems.
In general, the standards newsgroups (comp.std.xxx) are too narrowly focused
for these discussions.
Subject: [17] Original Author's Disclaimer and Affiliation:
From: Original Author's Disclaimer and Affiliation:
This information is believed to be reasonably accurate although I do
not verify every submission. Neither the United States Government nor any
agency thereof, nor any of their employees, makes any warranty, express or
implied, or assumes any legal liability or responsibility for the accuracy,
completeness, or usefulness of any information, apparatus, product, or
process disclosed, or represents that its use would not infringe privately
owned rights. Reference herein to any specific commercial product, process,
or service by trade name, trademark, manufacturer, or otherwise, does not
necessarily constitute or imply its endorsement, recommendation, or favoring
by the United States Government or any agency thereof. The views and
opinions of authors expressed herein do not necessarily state or reflect
those of the United States Government or any agency thereof.
---
Joseph Stith, stith@fnal.gov, 708/840-3846
Assistant to the Computing Division Head -- IRM Planning
Computing Division, Fermilab, PO Box 500, MS 120, Batavia, IL 60510
Subject: [18] Copyright Notice
From: Copyright Notice
This compilation of material is copyright Rod Van Meter,
rdv@alumni.caltech.edu. Permission is granted to copy this material,
provided this copyright notice is retained. The contents are not to be
significantly modified without the express written consent of the
author.
This is just to keep the various authors of this material from being
substantially misquoted or abused, not to restrict use of the
information.
Permission to include this FAQ in published compilations (CD-ROM or
book) will be granted upon direct request.
Subject: [19] Additional Topics to be added
From: Additional Topics to be added
File Systems: Unix, IBM, VMS, Tops-20, Extent-based, Amiga, Mac
(resource & data forks)
FTP Sites
Volume Sets & Partitions
Important People/Mass Storage History
Books & Other Publications
Principles for Evaluating New Technologies
Performance Evaluation
caching
seek time measurement
concurrent operations
queueing theory
Head Lifetime
Versioning in File Systems
Managing Risk
Media Migration/Managing Change
Physical v. Logical Addressing (seek optimizations, etc.)
Channels v. Busses
Intelligent Storage Subsystems
DEC's HSC-50 and star cluster for VAXen
Mainframe & Supercomputer I/O controllers
Security
The broadcast and home audio/video / mass storage connection
Databases and Mass Storage
File System Research: watchdogs, named pipes, compressing FSes
The naming problem: Prospero
Distributed Locking & Update
Content-Addressable Storage & Other Unusual Ideas
The old film-storage system Sam Coleman talks about
Byte Ordering
Supercomputer Storage
Companies: Adstor, Avastor
I/O Benchmarks
User file systems
System CPU & bus loads for file system work
Memory-Mapped Files
Persistent Object Systems & their files
The VFS layer in Unix
What to look for in a backup product
Offsite Storage v. Network Backup
Test Equipment -- SCSI & HiPPI Analyzers
(reorganize along small user/large user/developer lines?)
(need to date every entry if possible)
terminology