Merge tag 'upstream/2.05.3'

Upstream version 2.05.3
Mario Fetka 2017-11-01 12:03:46 +01:00
commit 1c03890432
6 changed files with 1356 additions and 0 deletions

amd64/7.31/README.TXT Normal file

@@ -0,0 +1,678 @@
--------------------------------------------------------------------
README.TXT
Adaptec Storage Manager (ASM)
as of May 7, 2012
--------------------------------------------------------------------
Please review this file for important information about issues
and errata that were discovered after completion of the standard
product documentation. In the case of conflict between various
parts of the documentation set, this file contains the most
current information.
The following information is available in this file:
1. Software Versions and Documentation
1.1 Adaptec Storage Manager
1.2 Documentation
2. Installation and Setup Notes
2.1 Supported Operating Systems
2.2 Minimum System Requirements
2.3 General Setup Notes
2.4 Linux Setup Notes
2.5 Debian Linux Setup Notes
3. General Cautions and Notes
3.1 General Cautions
3.2 General Notes
4. Operating System-Specific Issues and Notes
4.1 Windows - All
4.2 Windows 64-Bit
4.3 Linux
4.4 Debian and Ubuntu
4.5 FreeBSD
4.6 Fedora and FreeBSD
4.7 Linux and FreeBSD
4.8 VMware
5. RAID Level-Specific Notes
5.1 RAID 1 and RAID 5 Notes
5.2 RAID 10 Notes
5.3 RAID x0 Notes
5.4 RAID Volume Notes
5.5 JBOD Notes
5.6 Hybrid RAID Notes
5.7 RAID-Level Migration Notes
6. Power Management Issues and Notes
7. "Call Home" Issues and Notes
8. ARCCONF Issues and Notes
9. Other Issues and Notes
--------------------------------------------------------------------
1. Software Versions and Documentation
1.1. Adaptec Storage Manager Version 7.3.1, ARCCONF Version 7.3.1
1.2. Documentation on this DVD
PDF format*:
- Adaptec Storage Manager User's Guide
- Adaptec RAID Controller Command Line Utility User's Guide
*Requires Adobe Acrobat Reader 4.0 or later
HTML and text format:
- Adaptec Storage Manager Online Help
- Adaptec Storage Manager README.TXT file
--------------------------------------------------------------------
2. Installation and Setup Notes
- The Adaptec Storage Manager User's Guide contains complete installation
instructions for the Adaptec Storage Manager software. The Adaptec
RAID Controllers Command Line Utility User's Guide contains
complete installation instructions for ARCCONF, Remote ARCCONF,
and the Adaptec CIM Provider. The Adaptec RAID Controllers
Installation and User's Guide contains complete installation
instructions for Adaptec RAID controllers and drivers.
2.1 Supported Operating Systems
- Microsoft Windows*:
o Windows Server 2008, 32-bit and 64-bit
o Windows Server 2008 R2, 64-bit
o Windows SBS 2011, 32-bit and 64-bit
o Windows Storage Server 2008 R2, 32-bit and 64-bit
o Windows Storage Server 2011, 32-bit and 64-bit
o Windows 7, 32-bit and 64-bit
*Out-of-box and current service pack
- Linux:
o Red Hat Enterprise Linux 5.7, 6.1, IA-32 and x64
o SuSE Linux Enterprise Server 10, 11, IA-32 and x64
o Debian Linux 5.0.7, 6.0, IA-32 and x64
o Ubuntu Linux 10.10, 11.10, IA-32 and x64
o Fedora Linux 14, 15, 16, IA-32 and x64
o CentOS 5.7, 6.2, IA-32 and x64
o VMware ESXi 5.0, VMware ESX 4.1 Classic (Agent only)
- Solaris:
o Solaris 10
o Solaris 11 Express
- FreeBSD:
o FreeBSD 7.4, 8.2
2.2 Minimum System Requirements
o Pentium-compatible 1.2 GHz processor, or equivalent
o 512 MB RAM
o 135 MB hard disk drive space
o Video mode with more than 256 colors
2.3 General Setup Notes
- You can configure Adaptec Storage Manager settings on other
servers exactly as they are configured on one server. To
replicate the Adaptec Storage Manager Enterprise view tree
and notification list, do the following:
1. Install Adaptec Storage Manager on one server.
2. Start Adaptec Storage Manager. Using the 'Add remote system'
action, define the servers for your tree.
3. Open the Notification Manager. Using the 'Add system'
action, define the notification list.
4. Exit Adaptec Storage Manager.
5. Copy the following files onto a diskette from the directory
where Adaptec Storage Manager is installed:
RaidMSys.ser --> to replicate the tree
RaidNLst.ser --> to replicate the notification list
RaidSMTP.ser --> to replicate the SMTP e-mail notification list
RaidJob.ser --> to replicate the jobs in the Task Scheduler
6. Install Adaptec Storage Manager on the other servers.
7. Copy the files from the diskette into the directory where
Adaptec Storage Manager is installed on the other servers.
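For example, on a Linux server where Adaptec Storage Manager is
installed in /usr/StorMan, the copy in step 5 might look like the
following (the installation path and the /media/floppy mount point
are assumptions; adjust them to match your system):
   cp /usr/StorMan/RaidMSys.ser /usr/StorMan/RaidNLst.ser \
      /usr/StorMan/RaidSMTP.ser /usr/StorMan/RaidJob.ser \
      /media/floppy/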
2.4 Linux Setup Notes
- Because the RPM for Red Hat Enterprise Linux 5 is unsigned, the
installer reports that the package is "Unsigned, Malicious Software".
Ignore the message and continue the installation.
- To run Adaptec Storage Manager under Red Hat Enterprise Linux for
x64, the Standard installation with "Compatibility Arch Support"
is required.
- To install Adaptec Storage Manager on Red Hat Enterprise Linux,
you must install two packages from the Red Hat installation CD:
o compat-libstdc++-7.3-2.96.122.i386.rpm
o compat-libstdc++-devel-7.3-2.96.122.i386.rpm
NOTE: The version string in the file name may be different
from above. Be sure to check the version string on the
Red Hat CD.
For example, type:
rpm --install /mnt/compat-libstdc++-7.3-2.96.122.i386.rpm
where /mnt is the mount point of the CD-ROM drive.
- To install Adaptec Storage Manager on Red Hat Enterprise Linux 5,
you must install one of these packages from the Red Hat
installation CD:
o libXp-1.0.0-8.i386.rpm (32-bit)
o libXp-1.0.0-8.x86_64.rpm (64-bit)
- To install Adaptec Storage Manager on SuSE Linux Enterprise
Desktop 9, Service Pack 1, for 64-bit systems, you must install
two packages from the SuSE Linux installation CD:
o liblcms-devel-1.12-55.2.x86_64.rpm
o compat-32bit-9-200502081830.x86_64.rpm
NOTE: The version string in the file name may be different
from above. Be sure to check the version string on the
installation CD.
- To enable ASM's hard drive firmware update feature on RHEL 64-bit
systems, you must ensure that the "sg" module is loaded in the
kernel. To load the module manually (if it is not loaded already),
use the command "modprobe sg".
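For example, to load the module manually and confirm that it is
present, run the following as root (shown as an illustration only):
   modprobe sg
   lsmod | grep sg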
2.5 Debian Linux Setup Notes
- You can use the ASM GUI on Debian Linux 5.x only if you install
the GNOME desktop. Due to a compatibility issue with X11, the
default KDE desktop is not supported in this release.
- To ensure that the ASM Agent starts automatically when Debian
is rebooted, you must update the default start and stop values
in /etc/init.d/stor_agent, as follows:
[Original]
# Default-Start: 2 3 5
# Default-Stop: 0 1 2 6
[Modification]
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
To activate the changes, execute 'insserv stor_agent', as root.
--------------------------------------------------------------------
3. Adaptec Storage Manager General Cautions and Notes
3.1 General Cautions
- This release supports a maximum of 8 concurrent online capacity
expansion (OCE) tasks in the RAID array migration wizard.
- While building or clearing a logical drive, do not remove and
re-insert any drive from that logical drive. Doing so may cause
unpredictable results.
- Do not move disks comprising a logical drive from one controller
to another while the power is on. Doing so could cause the loss of
the logical drive configuration or data, or both. Instead, power
off both affected controllers, move the drives, and then restart.
- When using Adaptec Storage Manager and the CLI concurrently,
configuration changes may not appear in the Adaptec Storage
Manager GUI until you refresh the display (by pressing F5).
3.2 General Notes
- Adaptec Storage Manager requires the following range of ports
to be open for remote access: 34570-34580 (TCP), 34570 (UDP),
34577-34580 (UDP).
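As an illustration only (firewall configuration varies by Linux
distribution and is outside the scope of this README), the ports
above could be opened with iptables rules similar to the following,
run as root:
   iptables -A INPUT -p tcp --dport 34570:34580 -j ACCEPT
   iptables -A INPUT -p udp --dport 34570 -j ACCEPT
   iptables -A INPUT -p udp --dport 34577:34580 -j ACCEPT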
- Adaptec Storage Manager generates log files automatically to
assist in tracking system activity. The log files are
created in the directory where Adaptec Storage Manager is
installed.
o RaidEvt.log - Contains the information reported in
Adaptec Storage Manager event viewer for all
local and remote systems.
o RaidEvtA.log - Contains the information reported in
Adaptec Storage Manager event viewer for the
local system.
o RaidNot.log - Contains the information reported in the
Notification Manager event viewer.
o RaidErr.log - Contains Java messages generated by
Adaptec Storage Manager.
o RaidErrA.log - Contains Java messages generated by the
Adaptec Storage Manager agent.
o RaidCall.log - Contains the information reported when
statistics logging is enabled in ASM.
Information written to these files is appended to the existing
files to maintain a history. However, when an error log file
reaches a size of 5 MB, it is copied to a new file with
the extension .1 and the original (that is, the .LOG file) is
deleted and recreated. For other log files, a .1 file is created
when the .LOG file reaches a size of 1 MB. If a .1 file already
exists, the existing .1 file is overwritten.
- In the Event viewer, Adaptec Storage Manager reports both the
initial build task for a logical drive and a subsequent Verify/Fix
as a "Build/Verify" task.
- When displaying information about a physical device, ASM may
show the device, vendor, and model information incorrectly.
- After using a hot spare to successfully rebuild a redundant
logical drive, Adaptec Storage Manager will continue to
show the drive as a global hot spare. To remove the hot spare
designation, delete it in Adaptec Storage Manager.
--------------------------------------------------------------------
4. Operating System-Specific Issues and Notes
4.1 Windows - All
- The Java Virtual Machine has a problem with the 256-color
palette. (The Adaptec Storage Manager display may be distorted
or hard to read.) Set the Display Properties Settings to a
color mode with greater than 256 colors.
- When you shut down Windows, you might see the message
"unexpected shutdown". Windows displays this message if the
Adaptec Storage Manager Agent fails to exit within 3 seconds.
It has no effect on file I/O or other system operations and can
be ignored.
4.2 Windows 64-Bit
- Adaptec RAID controllers do not produce an audible alarm on the
following 64-bit Windows operating systems:
o Windows Server 2003 x64 Edition (all versions)
4.3 Linux
- When you delete a logical drive, the operating system can no longer
see the last logical drive. WORKAROUND: To allow Linux to see the
last logical drive, restart your system.
- The controller does not support attached CD drives during OS
installation.
- On certain versions of Linux, you may see messages concerning font
conversion errors. Font configuration under X-Windows is a known
JVM problem. It does not affect the proper operation of the
Adaptec Storage Manager software. To suppress these messages,
add the following line to your .Xdefaults file:
stringConversionWarnings: False
4.4 Debian and Ubuntu
- To create logical drives on Debian and Ubuntu installations, you
must log in as root. It is not sufficient to start ASM with the
'sudo /usr/StorMan/StorMan.sh' command (when not logged in as
root). WORKAROUND: To create logical drives on Ubuntu when not
logged in as root, install the package with the command
'sudo dpkg -i storm_6.50-15645_amd64.deb'.
4.5 FreeBSD
- On FreeBSD systems, JBOD disks created with Adaptec Storage Manager
are not immediately available to the OS. You must reboot the
system before you can use the JBOD.
4.6 Fedora and FreeBSD
- Due to an issue with the Java JDialog Swing class, the 'Close'
button may not appear on some Adaptec Storage Manager windows
or dialog boxes under FreeBSD or Fedora Linux 15 or higher.
WORKAROUND: Press ALT+F4 or right-click on the title bar, then
close the dialog box from the pop-up menu.
4.7 Linux and FreeBSD
- If you cannot connect to a local or remote Adaptec Storage Manager
installed on a Linux or FreeBSD system, verify that the TCP/IP hosts
file is configured properly.
1. Open the /etc/hosts file.
NOTE: The following is an example:
127.0.0.1 localhost.localdomain localhost matrix
2. If the hostname of the system is identified on the line
with 127.0.0.1, you must create a new host line.
3. Remove the hostname from the 127.0.0.1 line.
NOTE: The following is an example:
127.0.0.1 localhost.localdomain localhost
4. On a new line, type the IP address of the system.
5. Using the Tab key, tab to the second column and enter the
fully qualified hostname.
6. Using the Tab key, tab to the third column and enter the
nickname for the system.
NOTE: The following is an example of a completed line:
1.1.1.1 matrix.localdomain matrix
where 1.1.1.1 is the IP address of the server and
matrix is the hostname of the server.
7. Restart the server for the changes to take effect.
4.8 VMware
- If you are unable to connect to VMware ESX Server from a
remote ASM GUI, even though it appears in the Enterprise
View as a remote system, most likely, some required ports
are open and others are not. (The VMware ESX firewall blocks
most ports, by default.) Check to make sure that ports 34570
through 34581 are all open on the ESX server.
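On ESX 4.1 Classic, individual ports can typically be opened from
the service console with esxcfg-firewall; the command below is a
sketch only (the 'StorMan' label is an arbitrary example name, and
ESXi 5.0 manages its firewall differently, so consult the VMware
documentation for your release):
   esxcfg-firewall --openPort 34570,tcp,in,StorMan
Repeat the command for each TCP and UDP port in the range.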
- After making array configuration changes in VMware, you must
run the "esxcfg-rescan" tool manually at the VMware console
to notify the operating system of the new target characteristics
and/or availability. Alternatively, you can rescan from the
Virtual Infrastructure Client: click on the host in the left
panel, select the Configuration tab, choose "Storage Adapters",
then, on the right side of the screen, click "Rescan".
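For example, assuming the Adaptec controller appears as adapter
vmhba2 (the adapter name is an illustration only; list the adapters
with 'esxcfg-scsidevs -a' to identify the correct one):
   esxcfg-rescan vmhba2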
- With VMware ESX 4.1, the OS command 'esxcfg-scsidevs -a'
incorrectly identifies the Adaptec ASR-5445 controller as
"Adaptec ASR5800". (ASM itself identifies the controller
correctly.) To verify the controller name at the OS level,
use this command to check the /proc file system:
# cat /proc/scsi/aacraid/<Node #>
where <Node #> is 1, 2, 3 etc.
--------------------------------------------------------------------
5. RAID Level-Specific Notes
5.1 RAID 1 and RAID 5 Notes
- During a logical device migration from RAID 1 or RAID 5 to
RAID 0, if the original logical drive had a spare drive
attached, the resulting RAID 0 retains the spare drive.
Since RAID 0 is not redundant, you can remove the hot spare.
5.2 RAID 10 Notes
- If you force online a failed RAID 10, ASM erroneously shows two
drives rebuilding (the two underlying member drives), not one.
- You cannot change the priority of a RAID 10 verify. Setting
the priority at the start of a verify has no effect; the
priority is still shown as high. Changing the priority of a
running verify on a RAID 10 changes the displayed priority only
until a rescan is done, after which the priority shows as high
again.
- Performing a Verify or Verify/Fix on a RAID 10 displays the
same message text in the event log: "Build/Verify started on
second level logical drive of 'LogicalDrive_0.'" You may see the
message three times for a Verify, but only once for a Verify/Fix.
5.3 RAID x0 Notes
- To create a RAID x0 with an odd number of drives (15, 25, etc.),
specify an odd number of second-level devices in the Advanced
settings for the array. For a 25-drive RAID 50, for instance,
the default is 24 drives.
NOTE: This differs from the BIOS utility, which creates RAID x0
arrays with an odd number of drives by default.
- After building or verifying a leg of a second-level logical drive,
the status of the second-level logical drive is displayed as a
"Quick Initialized" drive.
5.4 RAID Volume Notes
- In ASM, a failed RAID Volume comprised of two RAID 1 logical
drives is erroneously reported as a failed RAID 10. A failed
RAID Volume comprised of two RAID 5 logical drives is
erroneously reported as a failed RAID 50.
5.5 JBOD Notes
- In this release, ASM deletes partitioned JBODs without issuing
a warning message.
- When migrating a JBOD to a Simple Volume, the disk must be quiescent
(no I/O load). Otherwise, the migration will fail with an I/O Read
error.
5.6 Hybrid RAID Notes
- ASM supports Hybrid RAID 1 and RAID 10 logical drives comprised
of hard disk drives and Solid State Drives (SSDs). For a Hybrid
RAID 10, you must select an equal number of SSDs and HDDs in
"every other drive" order, that is: SSD-HDD-SSD-HDD, and so on.
Failure to select drives in this order creates a standard
logical drive that does not take advantage of SSD performance.
5.7 RAID-Level Migration (RLM) Notes
- We strongly recommend that you use the default 256KB stripe
size for all RAID-level migrations. Choosing a different stripe
size may crash the system.
- If a disk error occurs when migrating a 2TB RAID 0 to RAID 5
(e.g., bad blocks), ASM displays a message that the RAID 5 drive
is reconfiguring even though the migration failed and no
RAID-level migration task is running. To recreate the
logical drive, fix or replace the bad disk, delete the RAID 5
in ASM, then try again.
- When migrating a RAID 5EE, be careful not to remove and re-insert
a drive in the array. If you do, the drive will not be included
when the array is rebuilt. The migration will stop and the drive
will be reported as Ready (not part of array).
NOTE: We strongly recommend that you do not remove and re-insert
any drive during a RAID-level migration.
- When migrating a RAID 6 to a RAID 5, the migration will fail if
the (physical) drive order on the target logical device differs
from the source; for instance, migrating a four-drive RAID 6 to
a three-drive RAID 5.
- Migrating a RAID 5 with greater than 2TB capacity to RAID 6 or
RAID 10 is not supported in this release. Doing so may crash
the system.
- When migrating from a RAID 0 to any redundant logical drive,
like RAID 5 or 10, Adaptec Storage Manager shows the status as
"Degraded Reconfiguring" for a moment, then the status changes
to "Reconfiguring." The "Degraded" status does not appear in
the event log.
- The following RAID-level migrations and online capacity
expansions (OCE) are NOT supported:
o RAID 50 to RAID 5 RLM
o RAID 60 to RAID 6 RLM
o RAID 50 to RAID 60 OCE
- During a RAID-level migration, ASM and the BIOS utility show
different RAID levels while the migration is in progress. ASM shows
the target RAID level; the BIOS utility shows the current RAID level.
- If a disk error occurs during a RAID-level migration (e.g., bad blocks),
the exception is reported in the ASM event viewer (bottom pane)
and in the support archive file (Support.zip, Controller 1 logs.txt),
but not in the main ASM Event Log file, RaidEvtA.log.
- Always allow a RAID-level migration to complete before gathering
support archive information in Support.zip. Otherwise, the Support.zip
file will include incorrect partition information. Once the RLM is
complete, the partition information will be reported correctly.
--------------------------------------------------------------------
6. Power Management Issues and Notes
- You must use a compatible combination of Adaptec Storage Manager
and controller firmware and driver software to use the power
management feature. All software components must support power
management. You can download the latest controller firmware
and drivers from the Adaptec Web site at www.adaptec.com.
- Power management is not supported under FreeBSD.
- Power management settings apply only to logical drives in the
Optimal state. If you change the power settings on a Failed
logical drive, then force the drive online, the previous
settings are reinstated.
- After setting power values for a logical drive in ARCCONF, the
settings are not updated in the Adaptec Storage Manager GUI.
--------------------------------------------------------------------
7. "Call Home" Issues and Notes
- The Call Home feature is not supported in this release. To gather
statistics about your system for remote analysis, enable statistics
logging in ASM, then create a Call Home Support Archive. For more
information, see the user's guide.
--------------------------------------------------------------------
8. ARCCONF Issues and Notes
- With VMware ESX 4.1, you cannot delete a logical drive
with ARCCONF. WORKAROUND: Connect to the VMware machine from a
remote ASM GUI, then delete the logical drive.
- With Linux kernel versions 2.4 and 2.6, the ARCCONF
DELETE <logical_drive> command may fail with a Kernel Oops
error message. Even though the drives are removed from the
Adaptec Storage Manager GUI, they may not really be deleted.
Reboot the controller; then, issue the ARCCONF DELETE command
again.
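For reference, the ARCCONF delete command takes the controller
number and the logical drive number. For example, to delete
logical drive 0 on controller 1 (values shown for illustration
only):
   arcconf delete 1 logicaldrive 0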
--------------------------------------------------------------------
9. Other Issues and Notes
- Some solid state drives identify themselves as ROTATING media.
As a result, these SSDs:
o Appear as SATA drives in the ASM Physical Devices View
o Cannot be used as Adaptec maxCache devices
o Cannot be used within a hybrid RAID array (comprised of
SSDs and hard disks)
- The blink pattern on Adaptec Series 6/6Q/6E/6T controllers differs
from Series 2 and Series 5 controllers:
o When blinking drives in ASM, the red LED goes on and stays solid;
on Series 2 and 5 controllers, the LED blinks on and off.
o When failing drives in ASM (using the 'Set drive state to failed'
action), the LED remains off; on Series 2 and 5 controllers, the
LED goes on and remains solid.
- Cache settings for RAID Volumes (Read cache, Write cache, maxCache)
have no effect. The cache settings for the underlying logical
devices take priority.
- On rare occasions, ASM will report invalid medium error counts on
a SATA hard drive or SSD. To correct the problem, use ARCCONF to
clear the device counts. The command is:
arcconf getlogs <Controller_ID> DEVICE clear
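For example, to clear the device error counters on controller 1
(substitute your controller's ID as reported by ASM or by
'arcconf getconfig'):
   arcconf getlogs 1 DEVICE clear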
- On rare occasions, ASM lists direct-attached hard drives and SSDs
as drives in a virtual SGPIO enclosure. Normally, the drives are
listed in the Physical Devices View under ports CN0 and CN1.
- Hard Drive Firmware Update Wizard:
o Firmware upgrade on Western Digital WD5002ABYS-01B1B0 hard drives
is not supported for packet sizes below 2K (512/1024).
o After flashing the firmware of a Seagate Barracuda ES ST3750640NS
hard drive, you MUST cycle the power before ASM will show the new
image. You can pull out and re-insert the drive; power cycle the
enclosure; or power cycle the system if the drive is attached directly.
- Secure Erase:
o If you reboot the system while a Secure Erase operation is in
progress, the affected drive may not be displayed in Adaptec
Storage Manager or other Adaptec utilities, such as the ACU.
o You can perform a Secure Erase on a Solid State Drive (SSD) to
remove the metadata. However, the drive will move to the Failed
state when you reboot the system. To use the SSD, reboot to
the BIOS, then initialize the SSD. After initialization, the SSD
will return to the Ready state. (An SSD in the Failed state cannot
be initialized in ASM.)
- The Repair option in the ASM Setup program may fail to fix a
corrupted installation, depending on which files are affected.
The repair operation appears to complete successfully, but the
software remains unfixed.
- Adaptec Storage Manager may fail to exit properly when you create
64 logical devices in the wizard. The logical devices are still
created, however.
- The "Clear logs on all controllers" action does not clear events
in the ASM Event Viewer (GUI). It clears device events, defunct
drive events, and controller events in the controllers' log files.
To clear events in the lower pane of the GUI, select 'Clear
configuration event viewer' from the File menu.
- Stripe Size Limits for Large Logical Drives:
The stripe size limit for logical drives with more than 8 hard
drives is 512KB; for logical drives with more than 16 hard
drives it is 256KB.
- Agent Crashes when Hot-Plugging an Enclosure:
With one or more logical drives on an enclosure, removing
the enclosure cable from the controller side may crash
the ASM Agent.
--------------------------------------------------------------------
(c) 2012 PMC-Sierra, Inc. All Rights Reserved.
This software is protected under international copyright laws and
treaties. It may only be used in accordance with the terms
of its accompanying license agreement.
The information in this document is proprietary and confidential to
PMC-Sierra, Inc., and for its customers' internal use. In any event,
no part of this document may be reproduced or redistributed in any
form without the express written consent of PMC-Sierra, Inc.,
1380 Bordeaux Drive, Sunnyvale, CA 94089.
P/N DOC-01700-02-A Rev A

BIN
amd64/7.31/arcconf Executable file

Binary file not shown.

BIN
amd64/7.31/libstdc++.so.5 Executable file

Binary file not shown.

i386/7.31/README.TXT Normal file

@@ -0,0 +1,678 @@
(File contents identical to amd64/7.31/README.TXT above.)

BIN
i386/7.31/arcconf Executable file

Binary file not shown.

BIN
i386/7.31/libstdc++.so.5 Executable file

Binary file not shown.