README.TXT
Adaptec maxView Storage Manager
Adaptec Command Line Interface Utility (ARCCONF)
as of May 19, 2015
--------------------------------------------------------------------
Please review this file for important information about issues
and errata that were discovered after completion of the standard
product documentation. In the case of conflict between various
parts of the documentation set, this file contains the most
current information.
The following information is available in this file:
1. New Features in this Release
2. Software Versions and Documentation
2.1 Utility Software
2.2 Documentation
3. Installation and Setup
3.1 Installation Instructions
3.2 Supported Operating Systems
3.3 General Setup Notes
3.4 Remote Access
3.5 Windows 8 Setup
3.6 SLES Setup
3.7 Ubuntu Setup
3.8 Bootable USB Image Security Warnings
3.9 RAW Device Setup
3.10 maxView Plugin for VMware vSphere Web Client
3.11 Firmware Downgrade on Series 8 Board Revision "B1"
3.12 Uninstallation Issues
4. Known Limitations
4.1 Login Issues on Windows Domain Server
4.2 Hot Spare Issues
4.3 Dual-Controller Systems
4.4 Email Notifications
4.5 SGPIO Enclosures
4.6 Non-RAID Mode Controllers
4.7 RAID-Level Migrations
4.8 maxCache Device Migration
4.9 ARCCONF maxCache Device Size Issue
4.10 Browser Issues
4.11 Remote System Issues
4.12 Power Management Issues
4.13 RAID 50/RAID 60, Max Drives
4.14 RAID 10 Segment Order
4.15 RAID 10 Rebuild Order
4.16 Verify with Fix
4.17 Locate Logical Drive Blink LED
4.18 ATA Secure Erase
4.19 ARCCONF Backward Compatibility
4.20 Adaptec Series 6 Controller Issues
4.21 Simple Volume Support
4.22 Auto-Volume Support
4.23 Hot-Removing Disk Drives on XenServer Guest OS
4.24 Updating Hard Disk Firmware on VMware Guest OS
4.25 Creating a Support Archive on a Guest OS
4.26 Enclosure Status Reporting
4.27 PHY Status on Enclosure Backplanes
4.28 Special Characters in Logical Device Names
4.29 Speaker Status on SuperMicro SAS2X28 Enclosures
4.30 Changing Read and Write Cache Settings for a Logical Drive
4.31 Online Help Issues
--------------------------------------------------------------------
1. New Features in this Release
o Support for new operating systems
o Support for HDD/SSD temperature monitoring
o Improved SMART error logging for SAS and SATA devices
o Basecode logs and Flash Backup Module hard error logs included in support archive
o Enhanced maxView SMTP mail client to support highly secured SMTP servers (Gmail, Yahoo, etc.)
o Open Pegasus 2.13 (maxView CIM Server) upgrade
o Re-design of maxView schedule dialog with advanced Primefaces calendar component
o Multi-level authentication and access privileges in maxView (Administrator/Standard user)
o Installer change: install the GUI, Agent, and CIM Server as separate components (typically saves 500 MB of RAM)
o Global Init/Uninit Physical Devices wizard
o Global Delete Logical Devices wizard
o Misc UI enhancements:
- Logical drive property tab shows initialization method
- Physical device temperature added to physical drive information
- Navigation for long-running logical device tasks from tree node to tab to task dialog
o ARCCONF changes:
- New commands: backupunit, driverupdate, setcustommode, setconnectormode
- Enhanced commands: identify command supports blink LED start/stop
o Bugfixes
--------------------------------------------------------------------
2. Software Versions and Documentation
2.1 Utility Software
o Adaptec maxView Storage Manager Version 1.08
o Adaptec ARCCONF Command Line Interface Utility Version 1.08
2.2 Documentation
PDF Format (English/Japanese):
o maxView Storage Manager User's Guide
o Adaptec RAID Controller Command Line Utility User's Guide
HTML and Text Format:
o maxView Storage Manager Online Help
o maxView Storage Manager README.TXT file
--------------------------------------------------------------------
3. Installation and Setup
3.1 Installation Instructions
The Adaptec SAS RAID Controllers Installation and User's Guide
contains complete installation information for the controllers
and drivers. The Adaptec RAID Controllers Command Line Utility
User's Guide contains complete installation information for
ARCCONF. The maxView Storage Manager User's Guide contains
complete installation information for the maxView Storage
Manager software.
3.2 Supported Operating Systems
Microsoft Windows:
o Windows Server 2012 R2, 64-bit
o Windows Server 2008, 32-bit and 64-bit
o Windows Server 2008 R2, 64-bit
o Windows SBS 2011, 64-bit
o Windows 7, 32-bit and 64-bit
o Windows 8, Windows 8.1, 32-bit and 64-bit
Linux:
o Red Hat Enterprise Linux 7.1, 64-bit
o Red Hat Enterprise Linux 6.6, 5.11, 32-bit and 64-bit
o SuSE Linux Enterprise Server 12, 64-bit
o SuSE Linux Enterprise Server 11, 10*, 32-bit and 64-bit
o Debian Linux 7.8, 32-bit and 64-bit
o Ubuntu Linux 14.10, 14.04.1, 12.04.3, 32-bit and 64-bit
o Fedora Linux 21, 32-bit and 64-bit
o CentOS 7.1, 64-bit
o CentOS 6.6, 5.11, 32-bit and 64-bit
*See Section 3.6 for SLES 10 setup issues
Virtual OS Environments:
o VMware ESXi 6.0, VMware ESXi 5.5
o Citrix XenServer 6.5 (32-bit GOS only)
Solaris:
o Solaris 10 U11
o Solaris 11.2
3.3 General Setup Notes
o maxView Storage Manager supports Adaptec Series 8, Adaptec
Series 7, and Adaptec Series 6 controllers. It is not backward
compatible with older Adaptec controller models.
o maxView Storage Manager is not supported on FreeBSD. Use
ARCCONF to create and manage arrays.
o maxView Storage Manager and legacy Adaptec Storage Manager (ASM)
cannot coexist on the same system.
3.4 Remote Access
maxView Storage Manager requires the following range of ports
to be open for remote access:
o 34570-34580 (TCP)
o 34570 (UDP)
o 34577-34580 (UDP)
See also Section 4.11 for OS-specific issues and workarounds.
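For example, on a RHEL/CentOS system running firewalld, the
ports can be opened as follows (a minimal sketch; adapt the
commands to the firewall in use on your system):
# Open the maxView remote access ports, then reload the rules
firewall-cmd --permanent --add-port=34570-34580/tcp
firewall-cmd --permanent --add-port=34570/udp
firewall-cmd --permanent --add-port=34577-34580/udp
firewall-cmd --reload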
3.5 Windows 8 Setup
To log in and use maxView Storage Manager on a Windows 8 system,
you must create a local user account; you cannot use your
MS Live account. To create a local user account:
1. Log into your MS Live account.
2. Select Settings->Change PC Settings->Users->Switch to Local user.
3. Provide account details.
4. Start maxView Storage Manager and log in with your local user
account credentials.
3.6 SLES Setup
o Due to a JRE conflict, maxView Storage Manager is not
supported on SLES 10 SP2/SP3 (x32, x64), and SLES 10 SP4 (x32).
WORKAROUND: Install SLES 10 SP4 64-bit then replace the 64-bit
JRE with a 32-bit JRE. Alternatively, use ARCCONF for storage
management on SLES 10 systems.
o To avoid a problem with launching maxView Storage Manager on
SLES 11 x64 systems with DHCP enabled, ensure that the
/etc/hosts file maps the server IP address to a valid host
name; it is not sufficient to map the IP address to 'localhost'.
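For example, a minimal /etc/hosts entry (the address and host
name shown are illustrative only):
192.168.10.50   sles11-host.example.com   sles11-host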
3.7 Ubuntu Setup
o To avoid a maxView Login failure on Ubuntu systems, you must
ensure that the root user account is enabled. (It is disabled,
by default, on Ubuntu 14.04 and later because no password is set.)
For example: sudo bash; sudo passwd root
o When upgrading maxView Storage Manager on an existing Ubuntu
Linux x64 installation, you must enable the upgrade switch
before installing the maxView .deb package:
export maxView_Upgrade=true
dpkg -i StorMan-*.deb
To uninstall maxView after the upgrade:
export maxView_Upgrade=false
dpkg -r storman
3.8 Bootable USB Image Security Warnings
When running maxView Storage Manager from the bootable USB
image, you may be prompted with one or more security warnings
before maxView launches. In each case, acknowledge
the warning and continue.
3.9 RAW Device Setup
On Adaptec Series 7 and Adaptec Series 8 controllers, a RAW
Pass Through device is analogous to a JBOD, as supported by
Adaptec Series 6 and older controllers. Any drive without
Adaptec RAID
metadata is exposed to the OS as a RAW Pass Through device. To
remove the Adaptec metadata and convert the drive to a RAW device,
use the Uninitialize command in maxView Storage Manager; any
existing data on the drive is destroyed. (You can also run
uninit from the BIOS or ARCCONF.) For more information about
working with RAW devices, see 'controller modes' in the CLI
User's Guide, and BIOS 'general settings' in the RAID Controller
Installation and User's Guide.
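For example, to convert one drive to a RAW device with ARCCONF
(the controller, channel, and device numbers are illustrative;
see the UNINIT command in the CLI User's Guide for the exact
syntax). Note that uninitializing destroys any data on the drive:
ARCCONF UNINIT 1 0 4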
3.10 maxView Plugin for VMware vSphere Web Client
The maxView Plugin for VMware vSphere Web Client is supported
on VMware 5.5 only.
3.11 Firmware Downgrade on Series 8 Board Revision "B1"
Firmware downgrade from maxView Storage Manager, ARCCONF or the
BIOS is not supported on Adaptec Series 8 RAID controllers with
Board Revision "B1" (the newest board revision). Older firmware
versions do not support Board Revision "B1".
3.12 Uninstallation Issues
o When uninstalling maxView Storage Manager on Fedora Linux,
the OS displays a series of warning messages about missing
files. These messages can be ignored; the uninstallation
completes successfully.
WORKAROUND: To avoid the warnings, uninstall maxView with
'./StorMan-1.06-21032.i386.bin --remove' rather than
'rpm -e StorMan'.
o When using the 'Modify' option to uninstall maxView Storage
Manager on Windows, the 'Adaptec' folder is not removed from
the file system.
--------------------------------------------------------------------
4. Known Limitations
4.1 Login Issues on Windows Domain Server
If the maxView login account doesn't have local administrative
rights, login will fail with an error message: 'Invalid Username
or Password'. On domain servers running later versions of Microsoft
Windows Server, where no local administrator exists or can be
created, maxView login is not possible. Use ARCCONF to create
and manage arrays.
4.2 Hot Spare Issues
o After using a global hot spare to rebuild a redundant logical
drive, maxView Storage Manager displays an erroneous event
message: "Deleted dedicated hot spare drive for logical device".
In fact, the message should read "Logical device X is no longer
protected by hot spare drive Y".
o maxView Storage Manager allows you to un-assign a global hot
spare drive while copyback is in progress. This procedure is
not recommended.
4.3 Dual-Controller Systems
In dual-controller systems, the controller order in maxView
Storage Manager and the BIOS differs. Example: with an
Adaptec 72405 and 7805 installed, the BIOS reports
the 72405 as controller 1 and the 7805 as controller 2;
in the GUI, the controller order is reversed.
4.4 Email Notifications
o On Linux systems, we recommend adding the SMTP host name
to the /etc/hosts file. Doing so ensures that email
notifications will succeed if you specify the email
server in maxView Storage Manager by host name. Otherwise,
email notifications (including test messages) may fail if
the DNS is unable to resolve the host name.
WORKAROUND: specify the email server in maxView Storage
Manager by IP address.
o On CentOS 5.9 x64, email notifications may not be sent for
logical drive creations, degraded logical drives, or logical
drives that are rebuilding or fully rebuilt.
4.5 SGPIO Enclosures
In this release, maxView Storage Manager does not show connector
information for SGPIO enclosures.
4.6 Non-RAID Mode Controllers
maxView Storage Manager can "see" RAID controllers operating
in HBA mode, Auto-Volume mode, and Simple Volume mode (Adaptec
Series 7 and Adaptec Series 8 controllers only). However, to
change the controller mode, you must use ARCCONF or the BIOS.
4.7 RAID-Level Migrations
o The following RAID-level migrations (RLM) are supported in
this release (see the ARCCONF example at the end of this section):
RAID 0 to RAID 5
RAID 0 to RAID 10
RAID 5 to RAID 6
RAID 6 to RAID 5
RAID 10 to RAID 5
RAID 5 to RAID 10
RAID 1 to RAID 5
SIMPLE VOLUME to RAID 1
RAID 1 to SIMPLE VOLUME
o When migrating a Simple Volume to RAID 1, maxView Storage
Manager reports the logical drive state as Impacted (rather
than Reconfiguring); this is normal.
o We do not recommend performing a RAID-level migration or
Online Capacity Expansion (OCE) on a logical drive with
maxCache SSD caching enabled.
NOTE: maxView Storage Manager grays out the options for logical
drives with maxCache enabled. ARCCONF terminates the task.
o Always allow a RAID-level migration to complete before creating
a support archive file. Otherwise, the support archive will
include incorrect partition information. Once the migration is
complete, the partition information will be reported correctly.
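The following is a hedged ARCCONF sketch of one supported
migration (RAID 0 to RAID 5); the controller, logical drive,
size, and channel/device numbers are illustrative, and the
exact MODIFY syntax is documented in the CLI User's Guide:
# Migrate logical drive 0 on controller 1 to RAID 5 across
# three drives, keeping the maximum usable size
ARCCONF MODIFY 1 FROM 0 TO MAX 5 0 0 0 1 0 2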
4.8 maxCache Device Migration
Only one maxCache Device is supported per controller. Moving
the maxCache Device (all underlying SSDs) from one
controller to another (assuming both controllers support
maxCache) is supported only if (1) the new controller does not
have a maxCache Device or any other maxCache Device with a
conflicting device number, and (2) the old controller was shut
down cleanly before the move.
4.9 ARCCONF maxCache Device Size Issue
ARCCONF supports >2TB maxCache Devices if you create the
device with the 'max' parameter. However, the functional
limit for the maxCache Device is 2TB, which is also the
limit in maxView Storage Manager.
4.10 Browser Issues
o To run maxView Storage Manager on the supported browsers,
Javascript must be enabled.
o Due to a certificate bug in Firefox 31.x, maxView login
may fail on RHEL systems with a "Secure Connection" error.
(Firefox 31.1 is the default browser on RHEL 6.6; on
RHEL 7.1, it is 31.4.)
WORKAROUND: Upgrade to Firefox 36.
o With the default Security setting in Microsoft Internet
Explorer 10 and 11, you may be unable to login to maxView
Storage Manager or experience certain GUI display issues
in the maxView online Help system.
WORKAROUND: Change the default Security setting from High
(or Medium-High) to Medium. Alternative: add the GUI IP
address to the trusted sites list.
o With Google Chrome, the scrollbar resets itself to the top
after selecting a drive in the Logical Drive wizard. To
select another drive, you must scroll back down to the
drive location.
o With Microsoft Internet Explorer 10, the controller
firmware update wizard does not show the f/w update
file name when the upload completes. To refresh the
display, click Next then Back.
o We do not recommend using multiple browsers simultaneously
on the same maxView instance. Doing so may cause display
issues or freezes; to restore maxView, refresh the
display by pressing F5.
4.11 Remote System Access on Linux and Windows
To avoid remote system access failures from Linux and Windows
clients running maxView Storage Manager, check and update one
or all of the following system and network settings:
Windows:
o Ensure that the DNS server information is properly configured
RHEL/Linux:
o Set the server.properties file permissions to at least read-only
at all levels (a hedged sketch follows at the end of this section):
1. Stop all maxView services.
2. Set the permissions of the server.properties file to read-write or
read-only at all levels (Owner, Group, and Others). Apply and close.
3. Restart all services in the order cim, agent, tomcat.
4. Try logging in to this system remotely from another system.
o Check/update these network settings:
1. Disable SELinux.
2. Disable the firewall.
3. Disable IPv6 on the system if ifconfig shows both an IPv4 and an IPv6 address.
4. Remove the virtual bridge virbr0, if present.
5. Enter the local IP address in the 'localip' parameter in the server.properties file.
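The following is a minimal sketch of the server.properties
steps on a RHEL system; the file path and service script
names are assumptions, so use the path and scripts created
by your maxView installation:
# Allow at least read access at all levels (Owner, Group, Others)
chmod 644 /usr/StorMan/server.properties
# Restart the maxView services in the order cim, agent, tomcat
/etc/init.d/stor_cimserver restart
/etc/init.d/stor_agent restart
/etc/init.d/stor_tomcat restart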
4.12 Power Management Issues
o Power management is not supported on FreeBSD.
o Capturing support logs from maxView or ARCCONF will spin up
drives when power management is active. This behavior is by design.
o For logical drives comprised of SSDs only, the Power Tab in
maxView Storage Manager is disabled.
o The Logical Drive Creation wizard does not save the power
management settings after the logical drive is created;
power management is always disabled on the new drive.
WORKAROUND: Enable power management for the logical drive
from the Set Properties window.
4.13 RAID 50/RAID 60, Max Drives
The maximum number of drives in a RAID 50 or RAID 60 differs
between maxView Storage Manager, ARCCONF, and the BIOS:
o BIOS and ARCCONF: 128 drives max
RAID 50 - From 2-16 legs with 3-32 drives/leg
RAID 60 - From 2-16 legs with 4-16 drives/leg
o maxView Storage Manager:
Assumes 2 legs for RAID 50/RAID 60 (non-selectable)
RAID 50 2-32 drives/leg (64 total)
RAID 60 2-16 drives/leg (32 total)
4.14 RAID 10 Segment Order
maxView Storage Manager and the Ctrl-A BIOS report the wrong
segment order for RAID 10s, regardless of the order in which
the drives are selected.
Example 1: Create RAID 10 with 2 SSDs and 2 HDDs in maxView Storage Manager:
(1a) ARCCONF and maxView Storage Manager see the following RAID segment order:
Device 2 (S1)
Device 1 (H2)
Device 3 (S2)
Device 0 (H1)
(1b) the BIOS/CTRL-A sees the following RAID segment order:
Device 2 (S1)
Device 1 (H2)
Device 0 (H1)
Device 3 (S2)
(1c) the correct and expected RAID segment order is:
Device 2 (S1)
Device 0 (H1)
Device 3 (S2)
Device 1 (H2)
Example 2: Create RAID 10 with 2 SSDs and 2 HDDs with ARCCONF:
(2a) the BIOS/CTRL-A sees the following RAID segment order:
Device 0 (H1)
Device 2 (S1)
Device 1 (H2)
Device 3 (S2)
(2b) ARCCONF and maxView Storage Manager see the correct RAID segment order:
Device 2 (S1)
Device 0 (H1)
Device 3 (S2)
Device 1 (H2)
4.15 RAID 10 Rebuild Order
A degraded RAID 10 logical drive is rebuilt one leg at a
time, not in parallel.
4.16 Verify with Fix
In maxView Storage Manager and ARCCONF, the Verify with Fix
operation is NOT available when:
1. The logical drive has a non-redundant RAID level
2. Other tasks are in progress on the logical drive
3. The logical drive is in a non-optimal or impacted state
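When none of these conditions apply, Verify with Fix can be
started from ARCCONF; for example (controller and logical
drive numbers are illustrative, and the exact TASK syntax is
documented in the CLI User's Guide):
ARCCONF TASK START 1 LOGICALDRIVE 0 VERIFY_FIX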
4.17 Locate Logical Drive Blink LED
In maxView Storage Manager, Locate Logical Drive continues to
blink the LED for a pulled physical drive in the array after
the locate action is stopped. (For unpulled drives, the blinking
stops.) This issue is not seen with ARCCONF.
4.18 ATA Secure Erase
In ARCCONF, the ATA Secure Erase operation cannot be aborted.
Once started, it continues to completion.
NOTE: ATA Secure Erase is also available in the Ctrl-A BIOS
and maxView Storage Manager.
4.19 ARCCONF Backward Compatibility
ARCCONF is backward compatible with older Adaptec controller
models. As a result, the ARCCONF user's guide and online help
show command options that are not supported by newer Adaptec
controllers, like the Adaptec Series 7 and Adaptec Series 8.
Example:
o With ARCCONF SETMAXCACHE, Adaptec Series 7 and Series 8
controllers do not support ADDTOPOOL or REMOVEFROMPOOL
4.20 Adaptec Series 6 Controller Issues
The following issues are seen only with Adaptec Series 6 RAID
controllers:
o In maxView Storage Manager, the Preserve Cache option on the
Set Properties window is not supported on Series 6 RAID
controllers. Attempting to set this option for the Series 6
controller fails.
o Renaming a RAID volume disables the write-cache (if enabled).
You cannot re-enable the write-cache in maxView Storage Manager.
WORKAROUND: Use ARCCONF to enable the write-cache (see the
example at the end of this section).
o In a VMware Guest OS under VMware 5.x, maxView Storage Manager
and ARCCONF do not detect existing logical drive partitions.
As a result, attempting to delete, clear, or erase the logical
drive may fail.
o On Series 6 controllers, maxView Storage Manager deletes
partitioned JBODs without issuing a warning message.
o Series 6 controllers do not support the ARCCONF GETPERFORM command.
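The following is a hedged example of re-enabling the
write-cache with ARCCONF (controller and logical drive
numbers are illustrative; see the SETCACHE command in the
CLI User's Guide for the exact syntax and cache modes):
# Set the write-cache of logical drive 0 on controller 1 to write-back
ARCCONF SETCACHE 1 LOGICALDRIVE 0 WB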
4.21 Simple Volume Support
o In this release, you can create a maximum of 128 Simple
Volumes in maxView Storage Manager, ARCCONF, or the BIOS.
o When a Simple Volume fails, the status remains unchanged
after drive replacement.
WORKAROUND: Manually delete the Simple Volume to remove it.
4.22 Auto-Volume Support
o Changing a controller to Auto-Volume mode (with ARCCONF or the
BIOS) is not supported if the configuration includes any logical
device type other than Simple Volume, including a maxCache
Device; the switch from RAID mode to Auto-Volume mode is blocked
in that case. After switching to Auto-Volume mode, you can create
and delete Simple Volumes only in maxView Storage Manager and
ARCCONF.
o In Auto-Volume mode, only the first 128 RAW drives are converted
to Simple Volumes; the rest of the RAW drives remain unchanged.
If you uninitialize a Ready drive while the controller is in
Auto-Volume mode, the firmware converts the drive automatically
until the Simple Volume count reaches the maximum.
4.23 Hot-Removing Disk Drives on XenServer Guest OS
XenServer does not support "hot-removal" of disk drives from a
partitioned logical drive. As a result, if you hot remove a disk
from a logical drive, the Guest OS becomes inaccessible because
the drive partition remains visible to the OS instead of being
cleared.
WORKAROUND: Reboot the XenServer host, detach the failed
partition, then restart the VM.
4.24 Updating Hard Disk Firmware on VMware Guest OS
Updating the firmware for a SAS hard disk drive with ARCCONF/maxView
can crash (PSOD) the VMware Guest OS. This issue is seen with SAS
hard drives only; with SATA drives, the firmware update completes
successfully.
4.25 Creating a Support Archive on a Guest OS
To create a support archive on a VMware or XenServer Guest OS,
use maxView Storage Manager only. Creating a support archive
with ARCCONF is not supported in this release.
4.26 Enclosure Status Reporting
Enclosure status, in maxView Storage Manager, is event-driven.
As a result, enclosures can have a "Degraded" status even if
related resources (fan, temperature, power) are performing
normally (Optimal status). For instance, the Enclosure status
changes to "Degraded" if the system reports an "Enclosure device
not responding ..." event, even if other sensor values are normal.
4.27 PHY Status on Enclosure Backplanes
In the Controller Properties window, maxView Storage Manager shows
the Connector Info as "unknown" for all PHYs on an enclosure-based
backplane (for instance, a backplane attached to connector 1).
4.28 Special Characters in Logical Device Names
Special characters are permitted in logical device names
in maxView Storage Manager, the BIOS, and ARCCONF. However,
with Linux ARCCONF (create, setname), special characters
must be "escaped" to ensure proper interpretation. For
example:
ARCCONF SETNAME 1 LOGICALDRIVE 1 arc_ldrive%\$12\&
4.29 Speaker Status on SuperMicro SAS2X28 Enclosures
SuperMicro SAS2X28 enclosures do not propagate the speaker
status to maxView Storage Manager. As a result, maxView
always displays the speaker status as Off.
4.30 Changing Read and Write Cache Settings for a Logical Drive
maxView Storage Manager does not allow you to change the
read-cache and write-cache settings for a logical drive in
one step. You must click OK after each change.
4.31 Online Help Issues
When opening the maxView Storage Manager help from a remote
Linux system (e.g., over a VPN), the help window may fail to
open with a 'can't establish connection to server' message.
WORKAROUND: replace 127.0.0.1:8443 in the URL with <system_ip_address>:8443
--------------------------------------------------------------------
(c) 2015 PMC-Sierra, Inc. All Rights Reserved.
This software is protected under international copyright laws and
treaties. It may only be used in accordance with the terms
of its accompanying license agreement.
The information in this document is proprietary and confidential to
PMC-Sierra, Inc., and for its customers' internal use. In any event,
no part of this document may be reproduced or redistributed in any
form without the express written consent of PMC-Sierra, Inc.,
1380 Bordeaux Drive, Sunnyvale, CA 94089.
P/N DOC-01768-05-A Rev A