VMWARE VSPHERE 6 ARCHITECTURE
vSphere 6.0 – New Configuration Maximums
VMware vSphere 6.0 introduces a long list of enhancements and new features compared with previous vSphere releases. With every vSphere release, the configuration maximums are the first question on everyone's mind: how big a platform does it support? Here is a quick walk-through of the new configuration maximums of vSphere 6.0, which support a monster VM and make your VMs ready for mission-critical applications. Let's take a detailed look at the new configuration maximums available with VMware vSphere 6.0.
New Configuration Maximums of vSphere 6.0:
vSphere 6.0 clusters now support 64 nodes and 6,000 VMs (up from 32 nodes and 4,000 VMs in vSphere 5.5)
vCenter Server Appliance (vCSA 6.0) supports up to 1,000 hosts and 10,000 virtual machines with the embedded vPostgres database
An ESXi 6.0 host now supports up to 480 physical CPUs and 12 TB of RAM (up from 320 CPUs and 4 TB in vSphere 5.5)
An ESXi 6.0 host supports 1,000 VMs and 32 serial ports (up from 512 VMs per host in vSphere 5.5)
vSphere 6.0 VMs support up to 128 vCPUs and 4 TB of vRAM (up from 64 vCPUs and 1 TB of memory in vSphere 5.5); a small validation sketch follows the comparison table below
vSphere 6.0 continues to support 64 TB datastores, the same as vSphere 5.5
Increased support for virtual graphics, including NVIDIA vGPU
Support for new operating systems such as FreeBSD 10.0 and Asianux 4 SP3
Fault Tolerance (FT) in vSphere 6.0 now supports up to 4 vCPUs (up from only 1 vCPU in vSphere 5.5)
Quick comparison between the configuration maximums of vSphere 5.5 and vSphere 6.0:

Configuration maximum        vSphere 5.5    vSphere 6.0
Hosts per cluster            32             64
VMs per cluster              4,000          6,000
Physical CPUs per host       320            480
RAM per host                 4 TB           12 TB
VMs per host                 512            1,000
vCPUs per VM                 64             128
vRAM per VM                  1 TB           4 TB
FT vCPUs per VM              1              4
Maximum datastore size       64 TB          64 TB
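As a quick illustration, this small Python sketch validates a proposed VM shape against the vSphere 6.0 per-VM maximums quoted above. The limit values come from this article; the function name and sample inputs are purely illustrative.

```python
# Minimal sketch: check a proposed VM configuration against the
# vSphere 6.0 per-VM maximums listed in this article.

VSPHERE_60_VM_MAXIMUMS = {
    "vcpus": 128,      # up from 64 in vSphere 5.5
    "vram_gb": 4096,   # 4 TB, up from 1 TB in vSphere 5.5
}

def check_vm_config(vcpus: int, vram_gb: int) -> list:
    """Return a list of violations against the vSphere 6.0 per-VM maximums."""
    violations = []
    if vcpus > VSPHERE_60_VM_MAXIMUMS["vcpus"]:
        violations.append(f"{vcpus} vCPUs exceeds the 128 vCPU maximum")
    if vram_gb > VSPHERE_60_VM_MAXIMUMS["vram_gb"]:
        violations.append(f"{vram_gb} GB vRAM exceeds the 4 TB maximum")
    return violations

print(check_vm_config(vcpus=96, vram_gb=2048))   # [] -> within limits
print(check_vm_config(vcpus=160, vram_gb=6144))  # two violations reported
```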
vSphere 6.0 What’s New – Improved and Faster vSphere Web Client
vSphere 6.0 has been released with many new features and improvements over the existing vSphere versions. The vSphere Web Client was introduced in vSphere 5.1, and it is one of the areas where system administrators were most eager to see improvement. VMware took the feedback from customers and partners about the vSphere Web Client seriously and has made incredible changes to it. Compared with the Web Client in vSphere 5.1 and 5.5, the improvements are:
Login is up to 13 times faster
Right-click menus render up to 4 times faster
One click navigates anywhere in the inventory
Highly customizable user interface (simple drag and drop)
Performance charts are available and usable in less than half the time
VMRC is integrated and allows advanced VM operations
Tasks are placed at the bottom:
Tasks are placed at the bottom of the screen, just as in the traditional vSphere Client, which lets you keep all of your tasks in view. The look and feel is the same as the vSphere Client.
Improved Navigation:
One of the biggest issues in previous versions of the Web Client was the difficulty of navigating inventory items. Core items such as Hosts & Clusters, VMs and Templates, Storage, and Networking are placed back on the home page, and a new menu added at the top gives access to inventory items from anywhere.
Redesigned Context Menus:
The context menus of the Web Client have been redesigned to be similar to those of the vSphere Client.
Performance Comparison:
VMware has published a detailed comparison of how the 6.0 Web Client has improved over previous versions of the vSphere Web Client. System administrators will really enjoy the performance improvements of the vSphere 6.0 Web Client.
VMware Fault Tolerance (FT)
Fault Tolerance has long been one of my favorite features, but because of its vCPU limitation it could not help protect mission-critical applications. With vSphere 6.0, VMware has broken that limitation: an FT VM now supports up to 4 vCPUs and 64 GB of RAM (versus 1 vCPU and 64 GB of RAM in vSphere 5.5). With this vSMP support, FT can now be used to protect your mission-critical applications. Along with vSMP support, many more FT features have been added in vSphere 6.0; let's take a look at what's new in vSphere 6.0 Fault Tolerance (FT).
Benefits of Fault Tolerance
- No TCP connection loss during failover
- Fault Tolerance is completely transparent to the guest OS
- FT does not depend on the guest OS or application
- Instantaneous failover from the primary VM to the secondary VM in case of an ESXi host failure
- Continuous availability with zero downtime and zero data loss
What’s New in vSphere 6.0 Fault Tolerance
FT supports up to 4 vCPUs and 64 GB of RAM
Fast Checkpointing, a new scalable technology, replaces "Record-Replay" to keep the primary and secondary VMs in sync
vSphere 6.0 supports vMotion of both the primary and the secondary virtual machine
With vSphere 6.0 you can back up your FT-protected virtual machines: FT now supports the vStorage APIs for Data Protection (VADP), and it works with the leading VADP solutions on the market from vendors such as Symantec, EMC, and HP
With vSphere 6.0, FT supports all virtual disk types: eager-zeroed thick (EZT), lazy-zeroed thick, and thin-provisioned disks. vSphere 5.5 and earlier supported only eager-zeroed thick disks
Snapshots of FT-configured virtual machines are supported with vSphere 6.0
The new version of FT keeps separate copies of VM files such as the .vmx and .vmdk files to protect the primary VM from both host and storage failures, and you are allowed to keep the primary and secondary VM files on different datastores (a quick eligibility-check sketch follows the comparison table below)
Difference between vSphere 5.5 and vSphere 6.0 Fault Tolerance (FT):

Feature                      vSphere 5.5 FT            vSphere 6.0 FT
vCPUs per FT VM              1                         4
RAM per FT VM                64 GB                     64 GB
Sync technology              Record-Replay             Fast Checkpointing
Supported disk types         Eager-zeroed thick only   EZT, thick, and thin
VADP backup support          No                        Yes
Snapshots of FT VMs          No                        Yes
Primary/secondary VM files   Shared                    Separate copies, separate datastores allowed
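As a practical aside, the following pyVmomi sketch checks whether a given VM sits within the vSphere 6.0 SMP-FT limits above and prints its current FT state. This is a minimal sketch, assuming pyVmomi is installed and a vCenter is reachable; the hostname, credentials, and the VM name "App01" are hypothetical placeholders, not values from this article.

```python
# Minimal sketch: verify a VM fits the vSphere 6.0 SMP-FT limits
# (<= 4 vCPUs, <= 64 GB RAM) and report its fault tolerance state.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "App01")  # hypothetical VM name

    vcpus = vm.config.hardware.numCPU
    ram_gb = vm.config.hardware.memoryMB / 1024
    print(f"{vm.name}: {vcpus} vCPUs, {ram_gb:.0f} GB RAM")
    print("Within vSphere 6.0 SMP-FT limits:", vcpus <= 4 and ram_gb <= 64)
    print("Current FT state:", vm.runtime.faultToleranceState)
finally:
    Disconnect(si)
```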
vSphere 6.0 vMotion Enhancements – vMotion Across vSwitches and vCenter Servers
vSphere 6.0 comes not only with great scalability but also with various new features that remove existing vMotion limitations. In earlier versions of vSphere, vMotion required an identical network configuration between the ESXi hosts, right down to the vSwitch level, and vMotion between vSphere Distributed Switches was not allowed; a migration was limited to a single dvSwitch. With vSphere 6.0, vMotion is allowed across vSwitches and even across vCenter Servers. Let's take a detailed look at the vSphere 6.0 vMotion enhancements.
vMotion Across Virtual Switches
vMotion is no longer restricted by the network configuration of the vSwitch. With vSphere 6.0 it is possible to perform vMotion across virtual switches (standard or distributed), and all of the VDS port metadata is transferred during the vMotion. The operation is entirely transparent to the guest VMs and requires no downtime. The only requirement for vMotion across vSwitches is that you have L2 connectivity for the VM network.
With vSphere 6.0, vMotion of VMs is possible in three different ways (a relocation sketch follows this list):
vMotion of VMs from Standard switch to Standard switch (VSS to VSS)
vMotion of VMs from Standard switch to Distributed Switch (VSS to VDS)
vMotion of VMs from Distributed Switch to Distributed switch (VDS to VDS)
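Here is a hedged pyVmomi sketch of the VSS-to-VDS case: a RelocateVM_Task whose RelocateSpec carries a deviceChange entry retargeting the VM's first NIC to a distributed portgroup. The function name and its arguments are hypothetical placeholders; treat this as a sketch of the API shape under those assumptions, not a hardened implementation.

```python
# Sketch: cross-vSwitch vMotion by relocating a VM while moving its NIC
# from a standard-switch portgroup to a distributed portgroup (VSS -> VDS).
# Assumes an existing pyVmomi connection (see the earlier sketch); vm,
# target_host, dvs_uuid, and dv_portgroup_key are placeholders.
from pyVmomi import vim

def cross_vswitch_relocate(vm, target_host, dvs_uuid, dv_portgroup_key):
    """Run a RelocateVM_Task that also retargets the first NIC to a VDS portgroup."""
    nic = next(d for d in vm.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualEthernetCard))
    # Point the NIC at the destination distributed portgroup.
    nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
        port=vim.dvs.PortConnection(switchUuid=dvs_uuid,
                                    portgroupKey=dv_portgroup_key))
    nic_change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)

    # deviceChange on RelocateSpec is the vSphere 6.0 addition that makes
    # cross-vSwitch vMotion possible; L2 connectivity must exist at the target.
    spec = vim.vm.RelocateSpec(host=target_host, deviceChange=[nic_change])
    return vm.RelocateVM_Task(spec=spec)
```

Using a plain NetworkBackingInfo backing (with the destination portgroup name) instead of the distributed backing covers the VSS-to-VSS direction in the same way.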
vMotion Across vCenter Servers
With vSphere 6.0, vMotion across vCenter Servers lets you simultaneously change the compute, storage, network, and management of a VM, and it leverages vMotion with unshared storage. In simple terms, a VM running on a host or cluster with one set of datastores and managed by vCenter 1 can be vMotioned to a different ESXi host with different datastores managed by another vCenter Server, vCenter 2.
Requirement for vMotion Across vCenter Servers:
vMotion across vCenter Servers is supported from vSphere 6.0 onward
Through the UI, the destination vCenter Server instance must be in the same SSO domain as the source vCenter; through the API, a different SSO domain is possible
250 Mbps network bandwidth per vMotion operation
Properties of vMotion Across vCenter Servers:
Same VM UUID is maintained across vCenter Server instances
All VM-related historical data, such as events, alarms, and tasks, is preserved after the vMotion operation
HA properties are preserved and DRS anti-affinity rules are honored during the vMotion operation (a pyVmomi sketch of the API call follows this list)
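Below is a hedged pyVmomi sketch of how the API exposes this: vSphere 6.0 added a service field (a ServiceLocator) to the RelocateSpec that points at the destination vCenter. All hostnames, UUIDs, thumbprints, and credentials here are hypothetical, and the destination host, pool, and datastore references must come from a session against the destination vCenter.

```python
# Sketch: cross-vCenter vMotion via RelocateVM_Task with a ServiceLocator
# identifying the destination vCenter (vSphere 6.0+ API).
from pyVmomi import vim

def cross_vcenter_relocate(vm, dest_host, dest_pool, dest_datastore,
                           dest_vc_url, dest_vc_uuid, dest_thumbprint,
                           user, password):
    """Relocate a VM to a host/pool/datastore managed by another vCenter."""
    # dest_host/dest_pool/dest_datastore are managed object references
    # obtained from a separate session on the destination vCenter.
    service = vim.ServiceLocator(
        url=dest_vc_url,            # e.g. "https://vcenter2.example.local" (placeholder)
        instanceUuid=dest_vc_uuid,  # destination vCenter instance UUID
        sslThumbprint=dest_thumbprint,
        credential=vim.ServiceLocatorNamePassword(username=user,
                                                  password=password))
    spec = vim.vm.RelocateSpec(host=dest_host, pool=dest_pool,
                               datastore=dest_datastore, service=service)
    # Per the properties above, the VM instance UUID, events, alarms,
    # and task history are preserved across the move.
    return vm.RelocateVM_Task(spec=spec)
```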
Long Distance vMotion
With vSphere 6.0, Long-Distance vMotion supports round-trip times of 100 ms or more (up from only 10 ms in previous versions). Long-Distance vMotion lets you vMotion your VMs from one of your organization's datacenters to another. Below are a few of the use cases for Long-Distance vMotion:
SRM and disaster avoidance (DA) testing
Permanent migrations
Disaster avoidance
Multi-site load balancing
Migration between Datacenters or Cloud Platform
Network Requirements of Long Distance vMotion:
All vCenter Servers must be connected via a Layer 3 network
The VM network needs L2 connectivity, with the same VM IP address available at the destination location
The vMotion network needs L3 connectivity and 250 Mbps of bandwidth per vMotion operation
The NFC network can be routed L3 through the management network or use an L2 connection
L4-L7 network services must be configured manually at the destination
vSphere 6.0 – What’s New in vCenter Server 6.0
In vSphere 6.0 you will notice some considerably new terms when installing vCenter Server 6.0. As with previous versions of vCenter deployment, you can install vCenter Server on a host machine running Microsoft Windows Server 2008 SP2 or later, or you can deploy the vCenter Server Appliance (VCSA). With vSphere 6.0 there are two new vCenter deployment models:
vCenter with an embedded Platform Services Controller
vCenter with an external Platform Services Controller
One of the most noticeable changes in the vCenter Server installation is the combination of the new deployment models and the embedded database. The embedded database has been changed from SQL Server Express to the vFabric Postgres database. The vFabric Postgres database embedded with the vCenter installer is suitable for environments with up to 5 hosts and 50 virtual machines, and vCenter 6.0 continues to support Microsoft SQL Server and Oracle as external databases. Let's review the system requirements for installing vCenter 6.0.
Supported Windows Operation System for vCenter 6.0 Installation:
Microsoft Windows Server 2008 SP2 64-bit
Microsoft Windows Server 2008 R2 64-bit
Microsoft Windows Server 2008 R2 SP1 64-bit
Microsoft Windows Server 2012 64-bit
Microsoft Windows Server 2012 R2 64-bit
Supported Databases for vCenter 6.0 Installation:
Microsoft SQL Server 2008 R2 SP1
Microsoft SQL Server 2008 R2 SP2
Microsoft SQL Server 2012
Microsoft SQL Server 2012 SP1
Microsoft SQL Server 2014
Oracle 11g R2 11.2.0.4
Oracle 12c
Components of vCenter Server 6.0:
There are two Major Components of vCenter 6.0:
vCenter Server: the vCenter Server group of services, which contains vCenter Server itself, the vSphere Web Client, Inventory Service, vSphere Auto Deploy, vSphere ESXi Dump Collector, and vSphere Syslog Collector
VMware Platform Services Controller: Platform Services Controller contains all of the services necessary for running the products, such as vCenter Single Sign-On, License Service, Lookup Service, and VMware Certificate Authority
vCenter 6.0 Deployment Models:
vSphere 6.0 introduces vCenter Server with two deployment models: vCenter Server with an embedded Platform Services Controller and vCenter Server with an external Platform Services Controller.
vCenter with an embedded Platform Services Controller:
All services bundled with the Platform Services Controller are deployed on the same host machine as vCenter Server. vCenter Server with an embedded Platform Services Controller is suitable for smaller environments with eight or fewer product instances.
vCenter with an external Platform Services Controller:
The services bundled with the Platform Services Controller and vCenter Server are deployed on different host machines. You must deploy the VMware Platform Services Controller first, on one virtual machine or host, and then deploy vCenter Server on another virtual machine or host. The Platform Services Controller can be shared across many products. This configuration is suitable for larger environments with nine or more product instances.
vSphere 6.0 – What’s New in vCenter Server Appliance(vCSA) 6.0
The vCenter Server Appliance (vCSA) is a security-hardened SUSE Linux (SLES 11 SP3) base operating system packaged with vCenter Server and the vFabric Postgres database; the appliance also supports Oracle as an external database. The vCenter Server Appliance contains all of the services necessary for running vCenter Server 6.0 along with its components. As an alternative to installing vCenter Server on a Windows host machine, you can deploy the vCenter Server Appliance. It lets you deploy vCenter Server quickly, without spending time preparing a Windows operating system for the vCenter Server installation. The vCSA now supports most of the features that are supported by the Windows version of vCenter Server.
What’s New with vCenter Server Appliance (vCSA) Installation:
Compared with the deployment of previous vCSA versions, vCSA 6.0 is different. Prior to vSphere 6.0 the vCSA was deployed from an OVF template, but vCSA 6.0 is deployed from an ISO image instead. You need to download the .iso installer, which contains the vCenter Server Appliance and the Client Integration Plug-in.
Install the Client Integration Plug-in, then double-click the vcsa-setup.html file in the software installer directory. This opens a page that uses the VMware Client Integration Plug-in; click Install or Upgrade to start the vCenter Server Appliance deployment wizard. You will be presented with various options during the deployment, including the vCenter Server deployment type.
vCenter 6.0 Deployment Methods:
Embedded Platform Services Controller:
All services bundled with the Platform Services Controller are deployed on the same virtual machine as vCenter Server. vCenter Server with an embedded Platform Services Controller is suitable for smaller environments with eight or fewer product instances.
External Platform Services Controller:
The services bundled with the Platform Services Controller and vCenter Server are deployed on different virtual machines. You must deploy the VMware Platform Services Controller first, on one virtual appliance, and then deploy vCenter Server on another appliance. The Platform Services Controller can be shared across many products. This configuration is suitable for larger environments with nine or more product instances.
vCSA 6.0 Appliance Access:
Compared with previous versions of the vCSA, appliance access has changed a bit in vCSA 6.0. The vCSA no longer has the admin URL on port 5480 for controlling and configuring the appliance. There are now three ways to access the vCSA appliance:
vSphere Web Client UI
Appliance Shell
Direct Control User Interface (DCUI)
With the DCUI added to the vCSA, the look and feel of the appliance is very similar to that of an ESXi host: a black-box model.
vCSA 6.0 Appliance Sizing:
During vCSA 6.0 deployment you will be asked to select the deployment size of the vCSA appliance. There are four default out-of-the-box sizes available with the vCSA deployment.
Comparison between vCenter 6.0 Windows and vCSA 6.0
The vCSA now supports most of the features that are supported by the Windows version of vCenter Server. Below is a quick comparison between the Windows version of vCenter Server and the vCenter Server Appliance with its embedded database.
vSphere 6.0 New Features – What is VMware Virtual Volumes (VVols)?
Virtual Volumes (VVols)
Virtual Volumes (VVols) is one of the new features added in vSphere 6.0. Virtual volumes are encapsulations of virtual machine files, virtual disks, and their derivatives. They are stored natively inside a storage system that is connected through Ethernet or SAN, are exported as objects by a compliant storage system, and are managed entirely by hardware on the storage side. Typically, a unique GUID identifies a virtual volume.
Virtual volumes are not preprovisioned; they are created automatically when you perform virtual machine management operations such as VM creation, cloning, and snapshotting. ESXi and vCenter Server associate one or more virtual volumes with a virtual machine.
Today all storage is LUN-centric or volume-centric, especially when it comes to snapshots, clones, and replication; VVols makes storage VM-centric. With VVols, most data operations can be offloaded to the storage array, and the array becomes aware of individual VMDK files: virtual volumes encapsulate virtual disks and other virtual machine files and store them natively on the storage system.
How Many Virtual Volumes (VVols) Are Created per Virtual Machine?
For every VM, a set of VVols replaces the flat VM directory used in today's system (a counting sketch follows this list):
1 config VVol, representing a small directory that contains the metadata files for a virtual machine: the .vmx file, descriptor files for virtual disks, log files, and so forth
1 data VVol for every virtual disk (.vmdk)
1 swap VVol, if needed
1 VVol per disk snapshot and 1 per memory snapshot
Additional virtual volumes can be created for other virtual machine components and virtual disk derivatives, such as clones, snapshots, and replicas.
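To make the accounting concrete, here is a minimal Python sketch that applies the rules above. The function and its parameters are illustrative, and treating "swap, if needed" as "only while powered on" is an assumption on my part.

```python
# Minimal sketch: count the virtual volumes a VM consumes under the
# per-VM rules listed above (config, data, swap, snapshot VVols).

def vvols_for_vm(num_disks: int, powered_on: bool,
                 num_snapshots: int, snapshots_with_memory: int) -> int:
    """Total VVols for one VM under the accounting rules in this article."""
    config_vvol = 1                              # metadata directory (.vmx, logs, ...)
    data_vvols = num_disks                       # one per .vmdk
    swap_vvol = 1 if powered_on else 0           # assumption: swap exists while powered on
    disk_snap_vvols = num_snapshots * num_disks  # one per disk, per snapshot
    mem_snap_vvols = snapshots_with_memory       # one per memory snapshot
    return (config_vvol + data_vvols + swap_vvol
            + disk_snap_vvols + mem_snap_vvols)

# Example: 2 disks, powered on, 1 snapshot taken with memory:
print(vvols_for_vm(num_disks=2, powered_on=True,
                   num_snapshots=1, snapshots_with_memory=1))  # -> 7
```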
Major Components of VMware Virtual Volumes (VVols):
Three objects in particular relate to Virtual Volumes (VVols): the storage provider, the protocol endpoint, and the storage container. Let's discuss each of the three.
Storage Providers:
A VVols storage provider, also called a VASA provider, is implemented through the VMware APIs for Storage Awareness (VASA) and is used to manage all aspects of VVols storage.
The storage provider delivers information from the underlying storage so that storage container capabilities can appear in vCenter Server and the vSphere Web Client.
Vendors are responsible for supplying storage providers that integrate with vSphere and provide support for VVols.
Storage Container:
VVols uses a storage container, which is a pool of raw storage capacity or an aggregation of storage capabilities that a storage system can provide to virtual volumes.
The storage container logically groups virtual volumes based on management and administrative needs. For example, a storage container can hold all of the virtual volumes created for a tenant in a multitenant deployment, or for a department in an enterprise deployment. Each storage container serves as a virtual volume store, and virtual volumes are allocated out of the storage container's capacity.
A storage administrator on the storage side defines the storage containers. The number of storage containers and their capacity depend on the vendor-specific implementation, but at least one container per storage system is required.
Protocol EndPoint (PE):
Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate.
ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their respective virtual volumes.
Each virtual volume is bound to a specific protocol endpoint. When a virtual machine on the host performs an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. Typically, a storage system requires a very small number of protocol endpoints. A single protocol endpoint can connect to hundreds or thousands of virtual volumes.
VVols Datastore:
A VVols datastore represents a storage container in vCenter Server and the vSphere Web Client.
After vCenter Server discovers storage containers exported by storage systems, you must mount them to be able to use them. You use the datastore creation wizard in the vSphere Web Client to map a storage container to a VVols datastore.
The VVols datastore that you create corresponds directly to the specific storage container and becomes the container’s representation in vCenter Server and the vSphere Web Client.
From a vSphere administrator's perspective, the VVols datastore is similar to any other datastore and is used to hold virtual machines. Like other datastores, it can be browsed and lists config virtual volumes by virtual machine name. Like traditional datastores, the VVols datastore supports mounting and unmounting; however, operations such as upgrade and resize are not applicable to it. The VVols datastore capacity is configured by the storage administrator outside of vSphere.
How Does vCenter Assign MAC Addresses to VMware Virtual Machines?
Many VMware administrators have asked me how MAC addresses are assigned to virtual machines. We all know that the first three octets are 00:50:56 and never change: they are the VMware Organizationally Unique Identifier (OUI). How the remaining three octets are generated may be the bigger question in our minds. Let's discuss how vCenter Server assigns MAC addresses to VMware virtual machines. This post applies only to VM MAC generation when the ESXi host is managed by vCenter Server; an ESXi host that is not managed by vCenter Server uses a different mechanism to generate VM MAC addresses. Let's begin with the calculation of the fourth octet.
4th octet of MAC = (128 + vCenter instance ID), converted to hexadecimal
To find the vCenter Server instance ID, log in to the vSphere Client, go to Administration -> vCenter Server Settings -> Runtime Settings, and note the vCenter Server unique ID. My vCenter Server unique ID is 24.
How Is the 4th Octet of the VM MAC Address Calculated?
The automatically generated MAC address has a fourth octet equal to 128 plus the vCenter instance ID, converted to hexadecimal:

4th octet of MAC = (128 + vCenter instance ID), converted to hexadecimal
                 = 128 + 24
                 = 152 decimal
                 = 98 in hexadecimal
I have confirmed this from a few virtual machine MAC addresses: the fourth octet is assigned as "98" (the sketch below reproduces this calculation).
The last two octets are assigned by a mechanism that ensures each generated MAC address is unique.
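Here is a minimal Python sketch of that calculation, using the article's example instance ID of 24. The OUI prefix and the (128 + instance ID) rule come from the text above; the 0-63 range check reflects the documented bounds of the vCenter unique ID, and the trailing xx:xx octets are placeholders because vCenter assigns them uniquely.

```python
# Minimal sketch: reproduce the 4th-octet calculation described above.
VMWARE_OUI = "00:50:56"  # first three octets: the VMware OUI, never change

def vcenter_mac_fourth_octet(instance_id: int) -> str:
    """(128 + vCenter instance ID) rendered as a two-digit hex octet."""
    if not 0 <= instance_id <= 63:
        raise ValueError("vCenter unique IDs range from 0 to 63")
    return format(128 + instance_id, "02x")

octet = vcenter_mac_fourth_octet(24)    # article's example instance ID
print(octet)                            # "98"  (128 + 24 = 152 = 0x98)
print(f"{VMWARE_OUI}:{octet}:xx:xx")    # 00:50:56:98:xx:xx (last two octets assigned by vCenter)
```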