Thursday, September 13, 2012

vSphere Networking

vSphere Networking Introduction:  
Virtual machines access the network layer through virtual switches. These virtual switches are configured either independently on each ESXi host or through a centrally configured virtual switch.

The vSphere infrastructure provides two types of virtual networking architecture, the standard virtual switch architecture and the distributed virtual switch architecture.

Standard virtual switches manage virtual machines and networking at the host level.
Distributed virtual switches manage virtual machines and networking at the datacenter level.

VMware recommends that all networks be set up or migrated using the distributed virtual switch architecture, since it simplifies the datacenter by centralizing network configuration in addition to providing a more robust feature set.

The virtual environment provides similar networking elements as the physical world, such as virtual network interface cards, vSphere Distributed Switches (VDS), distributed port groups, vSphere Standard Switches (VSS), and port groups.

Like a physical machine, each virtual machine has its own virtual NIC (vNIC) with its own MAC address and one or more IP addresses, and it responds to the standard Ethernet protocol exactly as a physical NIC would. An outside agent can determine that it is communicating with a virtual machine only by checking the vendor identifier (OUI), the first three bytes of the six-byte MAC address.
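The vendor-identifier check can be sketched in a few lines of Python. The function name and the (representative, not exhaustive) list of VMware OUIs are my own additions for illustration:

```python
def is_vmware_mac(mac: str) -> bool:
    """Return True if the MAC's vendor identifier (OUI) belongs to VMware.

    The OUI is the first three bytes of the six-byte MAC address.
    Illustrative sketch; the OUI list below is representative, not complete.
    """
    vmware_ouis = {"00:50:56", "00:0c:29", "00:05:69", "00:1c:14"}
    # Normalize separators and case, then take the first three octets.
    oui = mac.lower().replace("-", ":")[:8]
    return oui in vmware_ouis

print(is_vmware_mac("00:50:56:9A:BC:DE"))  # True
print(is_vmware_mac("3C:22:FB:12:34:56"))  # False
```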
With VSS, each host maintains its own virtual switch configuration while in a VDS, a single virtual switch configuration spans many hosts.

Each host server can have multiple standard virtual switches; you can create up to 127 virtual switches on each ESXi host. Each standard virtual switch has two sides. On one side are port groups, which connect virtual machines to the standard virtual switch. On the other side are uplink ports, which connect the standard virtual switch to the physical Ethernet adapters that reside on the host. In turn, these physical Ethernet adapters connect to physical switches leading to the outside world.
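The two-sided structure can be pictured with a tiny Python model. This is purely illustrative; the class and method names are invented for the sketch and are not part of any vSphere API:

```python
from dataclasses import dataclass, field

@dataclass
class StandardSwitch:
    """Toy model of a standard virtual switch: port groups on one side,
    uplinks (physical NICs) on the other.  Simplified for illustration."""
    name: str
    port_groups: list = field(default_factory=list)  # VM-facing side
    uplinks: list = field(default_factory=list)      # pNIC-facing side

    def add_port_group(self, pg_name: str) -> None:
        self.port_groups.append(pg_name)

    def add_uplink(self, vmnic: str) -> None:
        self.uplinks.append(vmnic)

vswitch = StandardSwitch("vSwitch0")
vswitch.add_port_group("Production")  # VMs attach their vNICs here
vswitch.add_uplink("vmnic0")          # connects out to a physical switch
print(vswitch)
```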

A port group is a unique concept in the virtual environment. A port group is a mechanism for setting policies that govern the network connected to it. Instead of connecting to a particular port on standard virtual switch, a virtual machine connects its vNIC to a port group. All virtual machines that connect to the same port group belong to the same network inside the virtual environment.
Just as port groups can be created to handle virtual machine traffic, VMkernel ports (the VMkernel connection type) can be created to provide network connectivity for the host itself and to handle VMware vMotion, IP storage, and Fault Tolerance traffic.

Moving a virtual machine from one host to another is called migration. Using vMotion you can migrate powered on virtual machines with no downtime.

The main drawback of a standard virtual switch is that every ESXi host must have its own vSwitches configured on it. That means virtual local area networks (VLANs), security policies, and teaming policies have to be configured individually on each and every ESXi host. If a policy needs to change, the vSphere administrator must change it on every host. While vCenter Server does allow the administrator to centrally manage ESXi hosts, changes to standard virtual switches still have to be made on each host.

Another drawback is that when a virtual machine is migrated with VMware vMotion, the networking state of the virtual machine is reset. This makes network monitoring and troubleshooting a more complex task in a virtual environment.

Setting Up a Standard Switch: Setting up a networking environment using standard virtual switches can be done from the Configuration tab in the Hosts and Clusters view of the vSphere Client.

Port groups for virtual machine networking or for networking services, such as vMotion and iSCSI networking, are configured using the Add Network wizard.

New standard virtual switches can be created during the port group creation process, or you can connect your new port group to an already existing standard virtual switch.

A virtual machine is connected to a virtual network by assigning the virtual machine’s NIC to that network’s port group.

A VDS functions as a single switch across all associated hosts. This enables you to set network configurations that span across all member hosts, and allows virtual machines to maintain consistent network configuration as they migrate across multiple hosts.

It is possible to migrate a group of virtual machines from standard virtual switches on the host to a distributed virtual switch.

Like a standard virtual switch, a distributed virtual switch is a layer 2 network mechanism for virtual machines. A distributed virtual switch can route traffic internally between virtual machines or link to an external network.

A distributed switch (dvSwitch) can forward traffic internally between virtual machines or link to an external network by connecting to physical Ethernet adapters, also known as uplink adapters. Each distributed switch can also have one or more distributed port groups assigned to it. The distributed port groups combine multiple ports under a common configuration and provide a stable anchor point for virtual machines connecting to labeled networks.

Distributed switches exist across two or more clustered ESXi hosts. vCenter Server owns the configuration of distributed virtual switches, and that configuration is consistent across all hosts. The uplink ports on the distributed virtual switch connect to uplink ports on hidden standard virtual switches, whose uplink ports in turn connect to physical NICs, which then connect to physical switches in the outside world.

Within a distributed virtual switch, the control and I/O planes are separate.
The control plane resides in and is owned by vCenter Server. The control plane is responsible for configuring distributed switches, distributed port groups, distributed ports, uplinks, and NIC teaming. The control plane also coordinates the migration of the ports and is responsible for the switch configuration.

The I/O Plane is implemented as a hidden standard virtual switch inside the VMkernel of each ESXi host. The I/O plane manages the actual I/O hardware on the host and is responsible for forwarding packets.

Network configuration at the datacenter level offers several advantages.
1. Simplifies datacenter setup and administration by centralizing network configuration.
2. Distributed ports migrate with their clients: when you migrate a virtual machine with vMotion, the distributed port's statistics and policies move with it, simplifying debugging and troubleshooting.
3. Provides for customization and third-party development.

An example of a third-party switch that leverages the vNetwork APIs is the Cisco Nexus 1000V. Network administrators can use this solution in place of the distributed switch to extend vCenter Server to manage Cisco Nexus and Cisco Catalyst switches.

You can enable two types of network services in ESXi, VMkernel and virtual machines.
The first type connects VMkernel services, such as NFS, iSCSI, or VMware vSphere® vMotion® to the physical network.
The second type connects virtual machines to the physical network.

IPv6 support is configured at the host level, and it is disabled by default. To enable or disable IPv6 support through the vSphere client, you must adhere to certain steps.
You can also configure IPv6 support through the command line. In either case, you must reboot the host for the change to take effect. Enabling IPv6 on the host does not disable IPv4. IPv4 and IPv6 can co-exist without any problems.

After IPv6 is enabled you have the option to specify IPv4 or IPv6 addresses.

There are three ways to assign an IPv6 address to an adapter. The first is automatic assignment using DHCP. The second is also automatic, using IPv6 stateless auto-configuration; this option generates a link-local address used to communicate with routers on the same link, for example through router advertisements. The third is entering static IPv6 addresses.
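The link-local address used in stateless auto-configuration is conventionally derived from the adapter's MAC address via the modified EUI-64 scheme (flip the universal/local bit, insert ff:fe in the middle, and prepend the fe80::/64 prefix). A small sketch with Python's standard ipaddress module; the function name is invented for illustration:

```python
import ipaddress

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    """Derive an IPv6 link-local address from a MAC address using
    modified EUI-64 (sketch of the stateless auto-configuration step)."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                  # flip universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # 64-bit interface ID
    # fe80::/64 prefix + interface identifier
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)

print(link_local_from_mac("00:50:56:9a:bc:de"))  # fe80::250:56ff:fe9a:bcde
```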

You also have the option to set a unique default gateway.

Private VLANs provide broader compatibility with existing networking environments that use private VLAN technology. Private VLANs enable users to restrict communication between virtual machines on the same VLAN or network segment, significantly reducing the number of subnets needed for certain network configurations.
PVLANs are useful in a DMZ, where a server needs to be available to external and possibly internal connections but rarely needs to communicate with the other servers in the DMZ.

The basic concept behind PVLANs is to divide an existing VLAN, now referred to as the primary PVLAN, into one or more segments. These segments are called secondary PVLANs. A PVLAN is identified by its primary PVLAN ID. A primary PVLAN ID can have multiple secondary PVLAN IDs associated with it.

Primary PVLANs are promiscuous, so virtual machines on a promiscuous PVLAN are reachable by and can reach any node in the same promiscuous PVLAN, as well as any node in the primary PVLAN. Ports on secondary PVLANs can be configured as either isolated or community. Virtual machines on isolated ports communicate only with virtual machines on promiscuous ports, whereas virtual machines on community ports communicate with both promiscuous ports and other ports on the same secondary PVLAN.
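The reachability rules above boil down to a small decision function. The sketch below is my own simplification: it assumes both ports already share one primary PVLAN and ignores everything except the secondary PVLAN ID and port type:

```python
def pvlan_can_communicate(a, b) -> bool:
    """Decide whether two ports on the same primary PVLAN may talk.
    Each port is (secondary_pvlan_id, port_type), with port_type one of
    'promiscuous', 'isolated', or 'community'.  Illustrative sketch only."""
    (sec_a, type_a), (sec_b, type_b) = a, b
    if "promiscuous" in (type_a, type_b):
        return True                  # promiscuous ports reach everything
    if type_a == type_b == "community":
        return sec_a == sec_b        # same community secondary PVLAN only
    return False                     # isolated ports reach only promiscuous

print(pvlan_can_communicate((5, "isolated"), (1, "promiscuous")))  # True
print(pvlan_can_communicate((5, "isolated"), (5, "isolated")))     # False
```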

Load balancing and failover policies can be controlled at the standard virtual switch level, or at the port group level on a distributed virtual switch, and can be set in the vSphere Client.
In some cases, multiple heavily loaded virtual machines are connected to the same pNIC and the load across the pNICs is not balanced.

The load balancing policies include route based on the originating virtual port ID, route based on IP hash, and route based on source MAC hash.
Route based on the originating virtual port ID uses fixed assignments: each virtual port maps to one uplink.
Route based on IP hash chooses an uplink from a hash of the source and destination IP addresses of each packet. How evenly traffic is distributed depends on the number of TCP/IP sessions to unique destinations.
Route based on source MAC hash selects an uplink from a hash of the source Ethernet adapter's MAC address.
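The difference between the fixed port-ID mapping and the per-session IP hash can be sketched as follows. The function names are invented, and the CRC32 hash is only a stand-in; the real hashing algorithm is internal to the vSwitch:

```python
import zlib

def uplink_by_port_id(port_id: int, uplinks: list) -> str:
    """Originating-port-ID policy: a VM's virtual port maps to a fixed
    uplink, so its traffic never moves unless a failover occurs."""
    return uplinks[port_id % len(uplinks)]

def uplink_by_ip_hash(src_ip: str, dst_ip: str, uplinks: list) -> str:
    """IP-hash policy: hash source + destination IP, so one VM can spread
    different sessions across different uplinks.  Illustrative hash only."""
    h = zlib.crc32(f"{src_ip}-{dst_ip}".encode())
    return uplinks[h % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
print(uplink_by_port_id(7, uplinks))                       # vmnic1 (7 % 2 == 1)
print(uplink_by_ip_hash("10.0.0.5", "10.0.0.9", uplinks))  # varies per IP pair
```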

Failover : The failover policies that can be set are network failure detection, notify switches, failback, and failover order.

Network failover detection specifies the method used to detect a failover. The policy can be set in the vSphere Client to either Link Status only or Beacon Probing. Beacon probing detects many failures that link status alone does not.

Notify Switches can be set to either Yes or No. If you select Yes, whenever a virtual Ethernet adapter is connected to the vSwitch or dvSwitch, or whenever that adapter's traffic is routed over a different physical Ethernet adapter in the team because of a failover event, a notification is sent out over the network to update the lookup tables on the physical switches. In almost all cases this is desirable, because it gives the lowest latency when a failover occurs.

NIC teaming also applies a failback policy, which determines whether a physical adapter is returned to active duty after it recovers from a failure.

The Failover Order policy setting specifies how to distribute the workload across the physical Ethernet adapters on the host. You can place some adapters in active use, designate a second group as standby adapters for failover situations, and mark other adapters as unused, excluding them from NIC teaming.
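Failover order amounts to walking the active list first, then the standby list, and picking the first adapter whose link is up. A minimal sketch, with invented names:

```python
def select_uplink(active, standby, link_up):
    """Failover-order sketch: prefer the first active adapter whose link
    is up, then fall back to standby adapters.  Unused adapters are
    simply never listed.  link_up maps adapter name -> bool."""
    for nic in active + standby:
        if link_up.get(nic, False):
            return nic
    return None  # every adapter in the team is down

link_up = {"vmnic0": False, "vmnic1": True}
print(select_uplink(["vmnic0"], ["vmnic1"], link_up))  # standby vmnic1 takes over
```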


Teaming: Although teaming can be configured on a standard virtual switch, load-based teaming is only available with distributed virtual switches.

Network I/O Control: For optimal utilization of a 10 GigE link, there has to be a way to prioritize the network traffic by traffic flow. Network I/O Control makes it possible to converge different kinds of traffic flows on a single pipe, and it gives the administrator control to ensure predictable network performance when multiple traffic types flow through the same pipe.
Network I/O control provides its users with different features. These include isolation, shares, and limits.

When network I/O control is enabled, distributed switch traffic is divided into the following predefined network resource pools: VMware Fault Tolerance traffic, iSCSI traffic, management traffic, NFS traffic, virtual machine traffic, vMotion traffic, and vSphere Replication (VR) traffic.

Shares allow flexible partitioning of network capacity, helping users deal with overcommitment when flows compete aggressively for the same resources. Network I/O Control uses shares to specify the relative importance of traffic flows.

Limits specify an absolute bandwidth cap for a traffic flow; traffic from a given flow is never allowed to exceed its limit. The limit is specified in megabits per second. Limits are useful when you want to keep one traffic type from affecting the others too much during network events.
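How shares and limits interact can be illustrated with a static split of link capacity. This is my own one-pass simplification (it does not redistribute bandwidth freed by a clipped pool, and real NIOC is a runtime scheduler, not a static calculation); the pool names and numbers are invented:

```python
def allocate_bandwidth(pools, capacity_mbps):
    """Divide link capacity among traffic pools by relative shares, then
    clip each pool at its optional limit (Mb/s).  Simplified sketch."""
    total_shares = sum(p["shares"] for p in pools.values())
    alloc = {}
    for name, p in pools.items():
        fair = capacity_mbps * p["shares"] / total_shares
        alloc[name] = min(fair, p.get("limit", float("inf")))
    return alloc

pools = {
    "vm":      {"shares": 100},
    "vmotion": {"shares": 50, "limit": 2000},  # capped despite its shares
    "iscsi":   {"shares": 50},
}
# On a 10 GigE link: vm 5000, vmotion 2000 (limit), iscsi 2500 Mb/s
print(allocate_bandwidth(pools, 10000))
```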

QoS Tagging: In the network resource pool settings, you can tag user-defined network resource pools with a QoS priority. The QoS priority tag field lets you select a priority code in the range 1 to 7, where 1 is the lowest priority and 7 is the highest. QoS tagging is available only with distributed switches.

vSphere 5.0 provides several improvements to the functionality of distributed switches. These include NetFlow, Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP), and port mirroring.

NetFlow is a common tool for analyzing network traffic. It is a specification for collecting types of network data for monitoring and reporting. NetFlow has multiple uses, including network monitoring and profiling, billing, intrusion detection and prevention, networking forensics, and Sarbanes-Oxley compliance.

Port mirroring is a technology that duplicates the network packets of one switch port (the source) to another port (the mirror port). The source's traffic can then be monitored at the mirror port, which assists in troubleshooting. Mirrored traffic can also be fed into security and other network analysis appliances.
Port mirroring overcomes the limitations of promiscuous mode by allowing the administrator to control which traffic on the distributed virtual switch can be seen by the port enabled for promiscuous mode.

Switch Discovery Protocol: CDP is available for both standard switches and distributed switches. LLDP is only available for distributed switches.

ESXi Firewall: The ESXi 5.0 management interface is protected by a new service-oriented, stateless firewall. It is enabled by default and, at installation time, is configured to block incoming and outgoing traffic except for default services such as the DNS client, DHCP client, and SNMP server.

VMware vShield is a suite of security virtual appliances and APIs that are built to work with vSphere, protecting virtualized datacenters from attacks and misuse.

The vShield suite includes vShield Zones, vShield Edge, vShield App, and vShield Endpoint.

vShield Zones provide firewall protection for traffic between virtual machines.

vShield Edge provides network edge security and gateway services to isolate the virtual machines in a port group, distributed port group, or Cisco Nexus 1000V.

vShield App is an interior, virtual-NIC-level firewall that allows you to create access control policies regardless of network topology.

vShield Endpoint delivers an introspection-based antivirus solution. It uses the hypervisor to scan guest virtual machines from the outside, without an agent, avoiding resource bottlenecks while optimizing memory use.

References and further reading
http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf
http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-networking-guide.pdf
http://www.vmware.com/resources/techresources/10194
http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf
