Saturday, January 26, 2013

PowerCLI - Starting and stopping SSH

From time to time I need to connect to ESXi hosts over SSH. I know it is not best practice, but I still have to do it. And the simplest task turns into a pain - starting and stopping the SSH server on several ESXi hosts. The following small piece of PowerCLI does the job just fine:
  • starting ssh service
Get-Cluster CLS01 | Get-VMHost | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"} | Start-VMHostService 
  • stopping ssh service
Get-Cluster CLS01 | Get-VMHost | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"} | Stop-VMHostService -Confirm:$false
Since Get-VMHostService returns all services on an ESXi host, you can use this approach to start/stop any service.

Wednesday, January 23, 2013

VMware DirectPath I/O - adding passthrough PCI devices to VM

VMware DirectPath I/O allows a guest OS to directly access physical PCI and PCIe devices connected to the host. There are several things to check before proceeding to the configuration:
  • a maximum of 6 PCI devices can be attached to a VM
  • the VM hardware version must be 7 or later
  • Intel VT-d or AMD IOMMU must be enabled
  • the PCI devices are connected and marked as available
In vSphere Client go to Configuration - Advanced Settings (Hardware) and check that the device is Active (a green icon). If no device is displayed, go to Edit and select your device. In some cases the host will need a reboot - the device will have an orange icon.

In /etc/vmware/esx.conf the modification is recorded as:
/device/000:000:27.0/owner = "passthru"
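If you have shell access to the host (SSH or ESXi Shell - an assumption, since it may be disabled), a quick way to list everything currently flagged for passthrough is to grep esx.conf:
# lists all devices whose owner is set to "passthru"
grep passthru /etc/vmware/esx.conf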

Go to VM - Edit Settings and add the device. A reservation equal to the memory of the VM will be created automatically. However, the reservation is not removed when the device is removed from the VM, so be sure to clean it up. If there is no memory reservation, powering on the VM will fail with an error complaining about the memory reservation.
Another way to add/remove passthrough devices is PowerCLI. The Add-PassthroughDevice cmdlet does not create the memory reservation, so that has to be done in a second step:
get-vmhost HostName | get-passthroughdevice | Where {$_.State -eq "Active"} | add-PassThroughDevice -VM VMName
foreach ($vm in (get-vm VMName)) {get-vmresourceconfiguration $vm | set-vmresourceconfiguration -MemReservationMB $vm.MemoryMB}


Removal of PCI device and memory reservation cleanup:
get-vm VMName | get-passthroughdevice | remove-passthroughdevice -Confirm:$false
get-vm VMName |get-vmresourceconfiguration | set-vmresourceconfiguration -MemReservationMB 0


A VM using DirectPath I/O does not support the following features:
  • snapshots
  • suspend and resume
  • HA
  • FT
  • DRS (the VM can be part of a DRS cluster, but it cannot be migrated across hosts)
  • hot adding and removal of devices

Saturday, January 19, 2013

vShield Edge Gateway "IP Masquerading" in vCD 5.1

vCloud Director 5.1 comes with some changes compared to 1.5: the IP masquerading setting was removed and there is no default rule on the firewall anymore. Since at the office I work on 1.5, and since there is a glitch in the way NAT is implemented, it took me a bit of troubleshooting to figure it out.

My problem was simple - pass traffic out of the organization from a VM (192.168.20.100) to an external server at 192.168.1.200.
This is done in 3 steps:
  • sub allocate the IP pool on the external network
  • configure NAT rules
  • configure firewall rules
The first thing to do is sub-allocate the external network IP pool. In the vCD GUI go to Edge Gateway, select the gateway, open the Properties menu - Sub-Allocate IP Pools tab. Choose the external network and sub-allocate the pool:

Second, configure the NAT rules. Go to Edge Gateway, select the gateway, open the Edge Gateway Services menu, NAT tab, and add an SNAT rule. In the rule select the external interface - the one connecting to the external network - fill in the IP address or subnet of the source VMs as the original IP, and choose one of the external IPs from the sub-allocated pool as the translated IP:

The third step is to configure the firewall rules (remember, there are no default rules in 5.1). Go to the Firewall tab and add the rule. I have also added an incoming rule to make the Edge Gateway respond to ping.


Finish the configuration, go to your VM and test the connectivity. You may read about the changes in the following VMware KB.
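Assuming the VM runs Linux (adjust accordingly for a Windows guest), the test can be as simple as:
ping -c 4 192.168.1.200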


However, if the test does not work, you can do a bit of troubleshooting: in vSphere Client, open a console to the Edge Gateway, log in with the admin/default credentials and use the following debug command:
debug packet display interface vNic_0 host_192.168.1.200

With vNic_0 being the external interface and 192.168.1.200 the destination host, you should see echo requests going from 192.168.1.61 to 192.168.1.200. If, by any chance, you see the original IP address not being NAT-ed, then try restarting the Edge Gateway. And please let me know if you see such behavior.

Thursday, January 17, 2013

Linux - adding a user to multiple servers

When I am not working on VMware infrastructure, I face administration tasks at the OS level. Since I am writing the scripts anyway, I will try to post some of them on the blog from time to time. What follows is a simple way of adding a new user to multiple servers (I am not using dsh, just a for loop):

for i in `cat server_list`; do ssh $i  "echo 'john:johnpass:30000:30000::/home/john:/bin/bash' | /usr/sbin/newusers;
sed -i '/AllowUsers/s/root/root\ john/' /etc/ssh/sshd_config;
sed -i '/AllowGroups/s/root/root\ john/' /etc/ssh/sshd_config;
service sshd reload;
sed -i -e '/root\tALL=(ALL)/a john\tALL=(ALL)\tALL' /etc/sudoers;
"; done

It adds the user information (username, password, UID, GID, home, shell) using the newusers command, then adds the user and group to the AllowUsers/AllowGroups directives in sshd_config (where applicable). The last sed adds the new user to the sudoers file right after the root line it matches, which contains a tab (\t).
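As a quick follow-up check (a sketch using the same server_list and the john account from above), you can confirm on each server that the account exists and the sudoers entry is in place:

for i in `cat server_list`; do echo "== $i =="; ssh $i "id john; grep '^john' /etc/sudoers"; done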

Wednesday, January 16, 2013

Adding a new portgroup to vSwitch using PowerCLI

This time I will present a very short PowerCLI script that I use to mass-change vSwitches on ESXi hosts. First, it retrieves a specific cluster, then all hosts in the cluster, and for each host it starts the configuration. There are 2 vSwitches on each host. On each vSwitch it creates a new portgroup, retrieves the teaming policy, disables failover order inheritance and reverses the active and standby vmnics.

get-cluster CLTEST01 | get-vmhost | foreach {
get-virtualswitch -VMHost $_ -Name vSwitch0 | New-VirtualPortGroup -Name pg_QLF_A_91 -VLanID 91 | Get-NicTeamingPolicy | Set-NicTeamingPolicy -InheritFailoverOrder:$false | Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic0
get-virtualswitch -VMHost $_ -Name vSwitch1 | New-VirtualPortGroup -Name pg_QLF_I_92 -VLanID 92 | Get-NicTeamingPolicy | Set-NicTeamingPolicy -InheritFailoverOrder:$false | Set-NicTeamingPolicy -MakeNicActive vmnic3 -MakeNicStandby vmnic2
}

What I really enjoyed is the simplicity of PowerCLI.

Friday, January 11, 2013

Installing vCloud Director on CentOS and MS SQL Express - part 3

This is part 3 of a 3-part post that presents the installation of vCloud Director 5.1 on CentOS 6.3 and MS SQL Server 2012 Express
  • part 1 presents Configuration of MS SQL 2012 Express Database for vCloud Director installation
  • part 2 presents Configuration of CentOS 6.3 for vCloud Director
  • part 3 presents Installation of vCloud Director 5.1


Installation of vCloud Director 5.1

Download the bin package from the VMware site (vmware-vcloud-director-5.1.1-868405.bin) and transfer it to the server. Change the permissions on the file and run it. Answer "yes" when the Linux distribution is checked and stop the installation at the second question.
[root@vcd5101 ~]# chmod u+x vmware-vcloud-director-5.1.1-868405.bin
[root@vcd5101 ~]# ./vmware-vcloud-director-5.1.1-868405.bin
Checking architecture...done
Checking for a supported Linux distribution...
You are not running a Linux distribution supported by vCloud Director.
Would you like to proceed anyway? [y/n] y
….
Would you like to run the script now? (y/n)? n

We stop the installation because VMware KB 1026309 states that the keytool to be used when generating SSL certificates is the one shipped by VMware, which can be found at /opt/vmware/vcloud-director/jre/bin/keytool. It is time to generate the SSL certificates (one for HTTP and one for the console proxy).
[root@vcd5101 ~]# mkdir /opt/vmware/vcloud-director/certs
[root@vcd5101 ~]# /opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks -storetype JCEKS -storepass password -genkey -keyalg RSA -validity 365 -alias http

[root@vcd5101 ~]# /opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks -storetype JCEKS -storepass password -genkey -keyalg RSA -validity 365 -alias consoleproxy

[root@vcd5101 ~]# chown -R vcloud.vcloud /opt/vmware/vcloud-director/certs/
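Before moving on, it is worth checking that both aliases made it into the keystore (same keystore file and password as above):
[root@vcd5101 ~]# /opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks -storetype JCEKS -storepass password -list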
Set up the transfer server storage at /opt/vmware/vcloud-director/data/transfer (an NFS share) - not mandatory for a single-cell installation, but all bloggers recommend it.


[root@vcd5101 ~]# vi /etc/fstab
192.168.X.X:/mnt/vol1-nfs /opt/vmware/vcloud-director/data/transfer nfs rsize=8192,wsize=8192,intr 0 0
[root@vcd5101 ~]# mount -a -t nfs
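A quick check that the share is actually mounted:
[root@vcd5101 ~]# df -h /opt/vmware/vcloud-director/data/transfer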
Now the installation can continue by running /opt/vmware/vcloud-director/bin/configure. Choose the IP addresses for HTTP and the console proxy, and enter the path to the certificate store. Add the syslog server IP (I am using the rsyslog instance on the vCD cell).

Enter information for DB connectivity
 

After DB configuration is finalized, start services

Configure Sysprep

Create the sysprep package structure for Windows 2000, Windows Server 2003 and Windows XP, and transfer the sysprep files into it.
[root@vcd5101 ~]# mkdir vcloud-sysprep
[root@vcd5101 ~]# cd vcloud-sysprep
[root@vcd5101 vcloud-sysprep]# mkdir win2000 win2k3 win2k3_64 winxp winxp_64

After the sysprep files have been transferred into the structure for each guest OS, run the following command:

[root@vcd5101 ~]# /opt/vmware/vcloud-director/deploymentPackageCreator/createSysprepPackage.sh /root/vcloud-sysprep/

Restart the vCD service:
[root@vcd5101 ~]# service vmware-vcd restart
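If the cell does not come back cleanly, the service status and the cell log are the first things to check (the status action and the log path below assume the default installation):
[root@vcd5101 ~]# service vmware-vcd status
[root@vcd5101 ~]# tail -f /opt/vmware/vcloud-director/logs/cell.log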

Setup vShield Manager

Open the VM console and log in to vShield Manager using admin/default. Execute enable and enter default as the password. Run the setup command and configure the networking. Log in over HTTPS to the IP address configured previously and connect vShield Manager to the vCenter Server. Finish the configuration by setting the DNS, NTP and syslog server information.

After the vShield Manager first configuration, log in to the vCloud Director setup wizard and start the configuration of the vCloud Director cell.



Thursday, January 10, 2013

Installing vCloud Director on CentOS and MS SQL Express - part 2

This is part 2 of a 3-part post that presents the installation of vCloud Director 5.1 on CentOS 6.3 and MS SQL Server 2012 Express
  • part 1 presents Configuration of MS SQL 2012 Express Database for vCloud Director installation
  • part 2 presents Configuration of CentOS 6.3 for vCloud Director
  • part 3 presents Installation of vCloud Director 5.1

Configuration of CentOS 6.3 for vCloud Director

The media kit used for this installation is the CentOS 6.3 64-bit minimal release (CentOS-6.3-x86_64-minimal.iso). My first vCD install was on the CentOS Live CD release, on which I spent too much time deactivating NetworkManager, configuring a simple IP alias, changing the init level and so on. That's why this one is CentOS minimal.
Before starting any configuration, check that you have access to a local/remote CentOS repository and that you have downloaded a 32-bit Java Runtime Environment, minimum 1.6 update 10. It has to be a 32-bit JRE, as it is the only version supported by VMware. I have used version 1.6 update 38 (jre-6u38-linux-i586-rpm.bin) downloaded from Oracle's site.
Create the VM for CentOS: 1 vCPU, 2 GB RAM (VMware recommended size), 10 GB HDD thin provisioned, 1 Gigabit interface. vCloud Director requires a minimum of 2 IP addresses - one for the portal and one for the remote console. However, the addresses can be configured on the same interface in the same subnet (as an alias) - this is my case. If your network setup is different, then configure 2 interfaces.
Next, open the console of the VM, power it on, connect the CD-ROM to the CentOS ISO and start installing the OS. During the install modify the default partitioning in the following way: reduce lv_swap to 1024 MB and increase lv_root to the maximum (almost 9 GB). For the rest of the install, follow the defaults.
After the system has been installed, it is time to start the preparation for the vCD installation. First things first... configure the network interfaces and DNS (we need network connectivity in order to install other packages such as perl):
  • configure the eth0 and eth0:0 interfaces
[root@vcd5101 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.X.X"
NETMASK="255.255.255.0"

[root@vcd5101 ~]# cp -p /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0:0
[root@vcd5101 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0:0
DEVICE="eth0:0"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.X.X"
NETMASK="255.255.255.0"

  • configure default gateway and restart the network service
[root@vcd5101 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=vcd5101
GATEWAY=192.168.X.X
[root@vcd5101 ~]# service network restart

  • configure DNS (my lab domain is cr.vmlab)
[root@vcd5101 ~]# vi /etc/resolv.conf
nameserver 192.168.X.X
domain cr.vmlab
[root@vcd5101 ~]# vi /etc/hosts
192.168.X.X vcd5101 vcd5101.cr.vmlab
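Test connectivity to both IP addresses, the DNS server and the gateway before moving on - for example (the addresses are placeholders, just like above):
[root@vcd5101 ~]# ping -c 2 192.168.X.X   # default gateway
[root@vcd5101 ~]# ping -c 2 192.168.X.X   # DNS server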
Next, we need to prepare the system for the VMware Tools installation. Being a minimal CentOS, we have to install perl first:
[root@vcd5101 ~]# yum install perl
Perl is needed for our next step, the VMware Tools installation. In vSphere Client choose Install/Upgrade VMware Tools, open the console, mount the CD-ROM and install the tools:
[root@vcd5101 ~]# mount /dev/cdrom /media/
[root@vcd5101 ~]# tar -zxvf /media/VMwareTools-9.0.0-782409.tar.gz
[root@vcd5101 ~]# ./vmware-tools-distrib/vmware-install.pl
During the configuration, choose the default values. At the end, unmount the CD-ROM:
[root@vcd5101 ~]# umount /media/
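A quick sanity check that the tools daemon is actually running (vmware-toolbox-cmd ships with this tools release; the exact output may differ):
[root@vcd5101 ~]# ps -e | grep vmtoolsd
[root@vcd5101 ~]# vmware-toolbox-cmd -v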
Since the version used is minimal, there are some dependencies to be fulfilled. The list of packages can be found in the official VMware documentation - I got mine from the vCloud Director course manual. Run the following command; the -y flag will take care of the long list of dependencies:
[root@vcd5101 ~]# yum install -y alsa-lib libICE libSM libX11 libXau libXext libXi libXt libXtst redhat-lsb
Side note - when deploying on the CentOS Live CD installation, the only missing package is redhat-lsb.
The last package needed is the JRE. Transfer the JRE rpm to the system and install it:
[root@vcd5101 ~]# yum install ld-linux.so.2
[root@vcd5101 ~]# chmod u+x jre-6u38-linux-i586-rpm.bin
[root@vcd5101 ~]#./jre-6u38-linux-i586-rpm.bin

Running the bin extracts the RPM and also installs the JRE; the installation is finished when the message "Done" appears.
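Before moving on, a quick check that the JRE actually registered does not hurt (the /usr/java path is Oracle's default install location and may differ):
[root@vcd5101 ~]# rpm -qa | grep -i jre
[root@vcd5101 ~]# ls /usr/java/
With the JRE in place, it is time to start the vCloud Director installation - part 3.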

UPDATE 11.01.2013 - Iptables
By default, CentOS comes with iptables enabled. It is good practice to keep the firewall on. I have configured the following ports (based on the VMware documentation). However, new rules may turn out to be needed and the post will be updated accordingly.
[root@vcd5101 ~]# vi /etc/sysconfig/iptables
# vCloud Director Ports
# vCloud HTTPS
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
# NFS
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 920 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 920 -j ACCEPT
#ActiveMQ
-A INPUT -m state --state NEW -m tcp -p tcp --dport 61611 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 61616 -j ACCEPT
#Syslog
-A INPUT -m state --state NEW -m udp -p udp --dport 514 -j ACCEPT
[root@vcd5101 ~]# service iptables restart
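To double-check that the rules were loaded:
[root@vcd5101 ~]# iptables -L INPUT -n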

Wednesday, January 9, 2013

Installing vCloud Director on CentOS and MS SQL Express - part 1

Before it all starts


In the past 3 months I have been involved in projects using vCloud Director. The necessity of having a test environment appeared naturally, so I started working in my home lab on deploying vCloud Director. I had the help of some other blogs (about which you will hear in the posts below), but I have also done it differently by using CentOS 6.3 and MS SQL 2012 Express (both not supported by VMware).

The following post presents installation of vCloud Director 5.1 on CentOS 6.3 and MS SQL 2012 Express and will have 3 parts:
  • part 1 presents Configuration of MS SQL 2012 Express Database for vCloud Director installation
  • part 2 presents Configuration of CentOS 6.3 for vCloud Director
  • part 3 presents Installation of vCloud Director 5.1
Before starting anything, do a little IP planning, check that you have a local DNS server and make sure that all infrastructure names can be resolved properly (hosts, vCenter Server, databases, vCloud cells and so on). I am using an AD-integrated MS DNS server (since I'll be testing LDAP integration). Other prerequisites: do not forget to deploy the vShield Manager appliance - each vCenter Server needs to have a vShield Manager. We will talk a bit later about the basic configuration of vShield Manager.

 

Configuration of MS SQL 2012 Express Database for vCloud Director installation


First create the VM: 1 vCPU, 3 GB RAM (2 GB recommended by MS), 25 GB HDD thin provisioned (thin provisioning on SSD works great). Install Windows Server 2008 and VMware Tools. Configure the server (IP, hostname) and, if an AD exists, join the server to the domain.

Next, download MS SQL 2012 Express from the Microsoft site, the version that includes Management Studio (SQLEXPRWT_x64_ENU). Installing the DB is pretty straightforward - make sure to choose mixed mode authentication and configure the sa user password. If you miss this step, it can be done after the install (SQL Server Management Studio - Server Properties - Security). During the configuration choose an instance name (VCDDB for example).

After SQL Server is installed, open SQL Server Management Studio, connect to the VCDDB instance with the sa user and add a user for vCloud Director - vcddbadmin (Security - Logins - New).


Now it is time to create the DB: Databases - New Database - vcddb01 (you can give it any name). Change the owner to the user vcddbadmin.



Sizing the DB: the VMware documentation offers the following parameters: data file (mdf) size = 100 MB, filegrowth = 10%; log file (ldf) size = 1 MB, filegrowth = 10%. In a lab environment these should suffice.
A very interesting post about the vCloud Director database can be found on Erik Bussink's blog. Based on that post, I have decided to make the following configuration on my DB:
  • data file size = 1024 MB, growth = 512 MB, limit = 3072 MB
  • log file size = 128 MB, growth = 128 MB, limit = 1024 MB
The actual limits are determined by both SSD space and VM size in my lab environment.



On the Options page of the database properties, set the collation to Latin1_General_CS_AS. Leave the default values for recovery model and compatibility level unchanged (Simple and SQL Server 2012, respectively).



The last action on the database is to prepare the DB. Again, I have used the information from Erik Bussink. Open a query editor and execute the script below:

USE [vcddb01]
GO
ALTER DATABASE [vcddb01] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [vcddb01] SET ALLOW_SNAPSHOT_ISOLATION ON;
EXEC sp_addextendedproperty @name = N'ALLOW_SNAPSHOT_ISOLATION', @value = 'ON';
ALTER DATABASE [vcddb01] SET READ_COMMITTED_SNAPSHOT ON WITH NO_WAIT;
EXEC sp_addextendedproperty @name = N'READ_COMMITTED_SNAPSHOT', @value = 'ON';
ALTER DATABASE [vcddb01] SET MULTI_USER;
GO

From the original script I have removed the first line, ALTER DATABASE [vcddb01] SET RECOVERY SIMPLE, since the DB already has the default recovery model set to simple. Check that the extended properties were added: database Properties - Extended Properties.

To finish the installation, a couple more things have to be done. First, configure the MS SQL server to listen on TCP/IP. In SQL Server Configuration Manager go to SQL Server Network Configuration and enable TCP/IP. In the Properties tab choose the IP address and set the following parameters: Active YES, Enabled YES, TCP Port 1433, TCP Dynamic Port 0.

Then, restart the service and check that the server is listening on port 1433 (netstat -an).

Finally, configure the Windows Firewall to allow incoming connections on TCP 1433 and, from another VM, test that communication with the server on TCP 1433 works using the command telnet ip_addr_db 1433.

Tuesday, January 1, 2013

My shiny and new white box

It's the new year and it started with a resolution (or more like a hope - or both) - I am going to try to post on the blog once a week.

This one is about the white box I acquired before Christmas. Its main purpose is to replace the old one, which had only 8 GB of RAM and was wasting a lot of my time. Since I could not do any upgrades to the old one, I had to buy a completely new computer. It all revolved around the idea of being able to accommodate a vCloud Director deployment, and after a bit of googling around I came up with the following:
- Intel Core i5-3470 - 4 cores @ 3.2 GHz, 77 W (my current Q9400 eats up 95W), no HT, but a lot of nice virtualization technologies: VT-x, VT-d,VT-x with EPT;
- since the max RAM supported by the CPU is 32 GB, I maxed it out: 4 x 8 GB Corsair Vengeance DDR3 dual channel @ 1600 MHz;
- motherboard ASUS P8Z77-V LX2 - Intel Z77 express chipset - integrated gigabit NIC and video graphics, SATA3, SATA2 and loaded with UEFI (my first UEFI home usage device);
- the case to hold it all is an Antec NSK4482 with a 380 W power supply.

It all added up to around 550 EUR. The HDDs are reused from the old box: 1 x Mushkin Callisto 60 GB SSD, 1 x Intel 330 120 GB SSD and 1 x Western Digital Caviar Black 640 GB SATA 2.

After moving the HDDs from the old computer to the new one, the only configuration I had to do was to select the management interface in the ESXi DCUI. Since the new rig has only 3 fans (CPU, case and PSU), it is also very quiet (the old one is a real noise-making machine with 5 fans). I am very happy with the arrangement, but the only tests I have done are with the current infrastructure (1 x root ESXi, 2 x virtualized ESXi, VCSA, NetApp filer, VSC, FreeNAS and AD). All VMs are kept on the Intel SSD. I will install vCD as soon as possible and see how it handles it.

After a week, I got a glimpse of one of my friend's white boxes, which uses a Shuttle SH67H3 barebone. If you want a smaller footprint then go for the Shuttle. It also has an interesting CPU cooling system that uses the fan from the power supply, so there is only 1 fan in the whole system. However, it is 100 EUR more expensive and there is no room for 3 HDDs.

I was not sure what to do with the old white box, so I sacrificed a 2 GB USB stick, installed ESXi on it and stuck it in the first USB port I saw. This way I have another ESXi host with 8 GB of RAM that can also be used.