Welcome


Thursday, November 10, 2011

New P7 changes

Last week IBM announced some changes to AIX and the POWER7 lineup. Although the entry servers will still be known as the 710, 720, 730 and 740, and the enterprise servers will still be called the 770 and 780, they will all have new model and machine type numbers. This is intended to help customers differentiate the new servers from the old, though it's important to understand that these machines are not POWER7+. General availability is set for Oct. 21. Here are the new numbers:

Model   Machine Type
710     8231-E1C
720     8202-E4C
730     8231-E2C
740     8205-E6C
770     9117-MMC
780     9179-MHC

All of the POWER7-enhanced systems end with the letter C, while of course the current models end in B, so it's easy to determine which system type you have. Another change made in the interest of clarity is that the 710 and 730 no longer share the same machine type and model. Also note that the 740 is no longer available as a tower -- it's exclusively rack-mounted now.


Enhanced I/O Capabilities and Higher Memory Densities
 
The biggest changes in the hardware revolve around the enhanced I/O capabilities and the increased memory densities across the servers. The servers all benefit from PCIe Gen2, which, according to the announcement details that I saw, provides "twice the I/O bandwidth which will enable higher performance, greater efficiencies and more flexibility." Of course, if you're not driving your Gen1 PCIe adapters to the point where they become your bottleneck, simply switching to Gen2 won't magically give you better performance. However, you will get better utilization of the hardware going forward with Gen2.

PCIe Gen2 provides for more I/O ports per adapter. You'll now see dual-port 10 Gb Ethernet cards and 4-port 8 Gb fibre adapters. You'll be able to push SAS data out at 6 Gb per second vs. the current generation's 3 Gb per second. The new 5913 Large Cache SAS adapter has 1.8 GB of cache and can drive up to 72 HDDs or 26 SSDs, or you can mix and match the drive types with this adapter. A huge improvement with this card is that it no longer has batteries, so you won't have to worry about replacing them. If it loses power, the card will use a capacitor and write its cache to flash memory. Note that this adapter is available from Oct. 31.
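If you want to see which SAS adapters a given box already has before planning an upgrade, the usual device commands are enough. A small sketch -- the sissas0 device name is an assumption, so use whatever lsdev actually reports on your system:

# list the SAS adapters on the system
lsdev -Cc adapter | grep -i sas
# show the details, including microcode level, for one adapter
lscfg -vl sissas0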

Gen2 allows you to more fully virtualize your systems by pushing more I/O with fewer adapters. With the new Gen2 adapters, you'll benefit whether it's fibre, SAS, networking or InfiniBand. Moving forward, we can stop thinking about PCI-X and concentrate solely on PCIe.
These new systems have more PCIe I/O slots in the CEC, with greater functionality per slot. The familiar IVE/HEA adapter is replaced with a standard 2-port 1 Gb Ethernet card (on the entry systems) and an integrated multifunction card (on the enterprise machines). The latter consists of a 4-port card with two 10 Gb Ethernet ports and two 1 Gb Ethernet ports, plus USB ports and a serial port.

There were four card slots in the entry-level CEC; now the entry systems have five slots that can be populated, while the enterprise machines have six slots per CEC. Counting the optional half-height cards that can be added to the 720 and 740, plus the standard Ethernet card that comes with the system, you can have up to 10 cards in total (though you can't use another card in place of the Ethernet card in that slot).

This announcement also includes new DIMM sizes: 64 GB in the enterprise server space and 16 GB in the entry systems. This allows the new "C" models to have greater maximum memory: 128 GB on the 710, 256 GB on the 720 and 730, and 512 GB on the 740. The new 770 and 780 models can have up to 4 TB of memory in a four-node system (1 TB per CEC).

If you need even more cores, a 96-core large-capacity 780 server is available. Imagine pairing 96 cores with 4 TB of memory on your 780. In addition, a clock speed tweak brings the 770 to 3.3 or 3.7 GHz, depending on whether you choose six or eight cores per socket. The 780 can max out at 3.92 GHz.


PowerVM and AIX Updates

A few changes are coming with PowerVM and AIX. Active memory mirroring is a feature where the hypervisor keeps two copies of its memory at the same time, with both copies updated simultaneously. In the (rare) event of a hypervisor memory failure on the primary copy, the second copy is invoked and a notification is sent to IBM. This capability was previously available on the 795, but now, with the new machines, it comes standard on the 780 and as an option on the 770.

With AIX 7 TL1, expect to see a new feature called Active System Optimizer, which is designed to autonomically improve workload performance (AIX 7 on POWER7 only). A new network option you can set is called tcp_fastlo, which enables TCP fast loopback. This reduces TCP/IP overhead and lowers CPU utilization when two TCP endpoints are on the same frame (e.g., communication between two processes in the same LPAR).
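For example, once you're on AIX 7.1 TL1, enabling fast loopback is a single tunable. A minimal sketch -- check that the option exists on your level with "no -h tcp_fastlo" before setting it:

# show the current setting
no -o tcp_fastlo
# enable TCP fast loopback now and make it persist across reboots
no -p -o tcp_fastlo=1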

Another new capability is active memory deduplication. It's available on the new machines running the new firmware, and it's used in conjunction with active memory sharing. Active memory deduplication detects duplicate memory pages in the shared pool and collapses them into a single physical copy, which helps similar workloads fit within physical memory constraints.
In addition, AIX features JFS2 filesystem enhancements that allow admins to tune performance by altering filesystem caching. This can be accomplished without having to unmount filesystems. Compared to earlier AIX releases, there's a 50 percent reduction in JFS2 memory usage for metadata.
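As a rough sketch of what this enables: the release-behind caching flags can now be changed on a mounted JFS2 filesystem with a remount, no unmount required. I'm assuming a filesystem called /data here:

# enable release-behind for both reads and writes without unmounting
mount -o remount,rbrw /data
# rbr (release-behind on read) and rbw (on write) can be set the same way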

Other software enhancements include:

- A new logical volume manager option to retry failed I/O operations indefinitely. This capability can aid in recovery from transient SAN failures, for instance.
- AIX 5.3 WPARs, which follow on the current AIX 5.2 WPAR offering. These allow you to run 5.3 workloads inside of AIX 7 into the future (i.e., even after IBM eventually ends its support of AIX 5.3). AIX 5.3 TL12 SP4 is required to make use of the 5.3 WPARs (see the sketch after this list).
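Here's a minimal sketch of standing up a versioned AIX 5.3 WPAR from a mksysb image of an existing 5.3 system. The image path and WPAR name are my own placeholders, and the chlv flag for the infinite-retry option is my best reading of the docs, so verify both on your level first:

# create a versioned AIX 5.3 WPAR from a 5.3 mksysb image
mkwpar -C -B /export/mksysb/aix53_image -n wpar53
# start the WPAR and log into it
startwpar wpar53
clogin wpar53
# separately, the LVM infinite I/O retry option is set per logical volume
chlv -O y datalv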

The new C models support the following AIX and VIOS levels (a quick version check follows the list):

AIX 5.3 TL12 SP5
AIX 6.1 TL5 SP7
AIX 6.1 TL6 SP6
AIX 7.1 TL0 SP4
AIX 7.1 TL1
VIOS 2.2.1
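
Before moving workloads to a C model, here's a quick way to check where your existing systems stand -- oslevel on AIX and ioslevel on the VIO server:

# on AIX: show the full oslevel including TL and service pack
oslevel -s
# on the VIO server, as padmin
ioslevel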

PowerVM offers its own improvements. Live partition mobility operations can potentially run at twice the previous speed while performing up to eight LPM operations at once. Network balancing allows for load balancing across backup and primary shared Ethernet adapters. Shared storage pools are also enhanced. These PowerVM capabilities are available on the new VIO server. I'll definitely write much more on this soon.
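For context, an LPM operation is still driven from the HMC command line (or GUI) the same way as before; the speed and concurrency improvements come underneath. A sketch with my own placeholder system and partition names:

# validate the mobility operation first
migrlpar -o v -m SRC-SYS -t TGT-SYS -p mylpar
# then run the actual live migration
migrlpar -o m -m SRC-SYS -t TGT-SYS -p mylpar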

There are also updates to PowerHA SystemMirror, including an SAP LiveCache Hot Standby solution, and PowerHA Federated Security, which provides for centralized administration via Systems Director, along with additional supported storage options to use with HA (including XIV, the V7000, the SVC, the DS8800 and options from EMC, Hitachi and HP).

IBM also previewed planned future VIOS features: unattended installation (no-touch VIOS installation), GUI-based VIOS installs, VIOS setup and validation tools, and the capability to manage VIO servers as a pair rather than individually.

Cheers,
Guru

Wednesday, February 23, 2011

IVM & HMC difference

- With IVM, a dual VIO server configuration is not possible
- System power-on must be done by physically pushing the system power-on button or by remotely accessing ASMI
- Multiple systems can be managed via one HMC interface, but IVM can manage only one system
- On POWER5 and some POWER6 systems, you cannot assign physical adapters to partitions with IVM
- Only one virtual SCSI adapter is supported for IVM-managed LPARs
- Multiple Shared Processor Pools are not supported on IVM-managed POWER6 systems

regards,
Guru

Tuesday, February 22, 2011

unable to find the boot device

When creating the boot image with bosboot, or when listing the boot list, you may get the error "unable to find the boot device", or the "bootinfo -b" command may return no output.
This is due to bugs in the OS (5.3 TL7 or 6.1 TL3/5 SP3), so proceed with the steps below.

Verify that /dev/ipldevice and /dev/ipl_blv exist
Verify the df -g output (check for hd5)
Use the "ipl_varyon -i" command to verify the boot disk info -- the rootvg hdisk's status should show as 'YES'
If ipldevice or ipl_blv is missing, recreate the links (see the worked example below):
Link the rootvg hdisk's raw device to ipldevice: "ln /dev/rhdisk# /dev/ipldevice"
Link hd5 to ipl_blv: "ln /dev/hd5 /dev/ipl_blv"
Now, if everything looks good, reboot the LPAR. That should resolve the issue.
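
Putting it all together, here's a worked example assuming rootvg lives on hdisk0 (an assumption -- substitute your own disk), finishing with a fresh boot image and a boot list check:

# confirm the boot device links and the boot disk status
ls -l /dev/ipldevice /dev/ipl_blv
ipl_varyon -i
# recreate the links if they are missing (rootvg on hdisk0 assumed)
ln /dev/rhdisk0 /dev/ipldevice
ln /dev/hd5 /dev/ipl_blv
# rebuild the boot image and verify the boot list
bosboot -ad /dev/hdisk0
bootlist -m normal -o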

Note:-
Before doing any installation (a TL upgrade, patching or an OS migration), reboot the machine first, to be on the safe side.

Regards,
Guru