Wednesday, December 10, 2014

Allowing SSH to ESXi Servers with public/private key authentication


If you have a large number of ESXi hosts that you need to SSH to, each with different passwords and so on, public/private key authentication can save you a lot of time. (This is not super secure, so do your own security assessment first.)

Just like you can do this on a Unix host, you can do the same for ESXi:

1.  Generate a public/private key pair on the Linux host:

cd ~/.ssh
ssh-keygen -t rsa

This will create two files in ~/.ssh: id_rsa and id_rsa.pub.

In ESXi 5.x, the location of authorized_keys is: /etc/ssh/keys-<username>/authorized_keys

So you can do this:

scp /root/.ssh/id_rsa.pub remote-ESXi-host:/etc/ssh/keys-root/authorized_keys

Like this for example:

scp /root/.ssh/id_rsa.pub 192.168.3.102:/etc/ssh/keys-root/authorized_keys

Of course, if you want to do this from more than one source host, just append to the remote authorized_keys file rather than overwriting it.
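If you have a long list of ESXi hosts to push the same key to, a small loop saves some typing. This is just a minimal sketch: the host list file esxi-hosts.txt is a made-up name (one hostname or IP per line), it assumes the keys-root directory already exists on each host, and you will still be prompted for each host's password once:

# append the local public key to every host listed in esxi-hosts.txt
# note: this appends, so running it twice will add duplicate entries
for h in $(cat esxi-hosts.txt); do
  cat ~/.ssh/id_rsa.pub | ssh root@$h 'cat >> /etc/ssh/keys-root/authorized_keys'
done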


Tuesday, December 9, 2014

VMware NFS datastores inactive (unmounted) after reboot

This comes up once in a while: you reboot a server or a storage array, and the NFS-mounted datastores don't come back up. (In this case it happened after I used Update Manager to patch some ESXi hosts.)

This is what it looks like:


The resolution is quite simple. You *could* just unmount the NFS stores and remount them, but that can take time. The easy way is to SSH to the host and issue this command:

esxcfg-nas -r
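To confirm the datastores actually came back, you can list the NFS mounts from the same SSH session; the first command is the classic one, and the esxcli form should also be available on 5.x hosts:

esxcfg-nas -l

esxcli storage nfs list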





that's it!


Thursday, October 2, 2014

Guide to Cloning/backing up ESXi Servers

This has been a major headache, and it wasn't really fully documented, so I thought it was better to post it for others who run into a similar project:

I was looking through an instructions page, which was over 10 pages long, most of which covered configuring a very complicated Standard Switch.  The decision was made to "clone" the ESXi host,
thereby copying the network configuration and other variables on the server.  This was done on ESXi 5.5 Update 2, but I think it applies all the way back to 4.0.  Make sure you have the exact same version of ESXi, either on the command line like this, or with the other methods:

~ # vmware -vl



OK, now that you have made sure you have the same version and build on both ESXi hosts, let's move on:

VMware has two or three tools to back up an ESXi host, or, as in my case below, to clone it:

1.)  PowerCLI:

To Create a backup:
Get-VMHostFirmware -VMHost $host -BackupConfiguration -DestinationPath C:\HostBackups

To Restore that backup:
Set-VMHostFirmware -VMHost $Host -Restore -SourcePath c:\Hostbackups\backupfile.tgz -HostUser user -HostPassword password

Another important point is that the ESXi versions have to be EXACTLY the same; otherwise you will not really get a useful error, only this:

    + FullyQualifiedErrorId : Client20_SystemManagementServiceImpl_RestoreVmHostFirmware_ViError,VMware.VimAutomation.ViCore.Cmdlets.Commands.Host.SetVMHostFirmware


I had to dig in the logs (/var/log/vmware/hostd.log) to get the error below, which shows that there's an ESXi version mismatch:

2014-10-01T08:57:01.770Z [606C2B70 info 'Hostsvc.FirmwareSystem' opID=hostd-cee0 user=root] RestoreConfiguration failed with status 1. Output : Mismatched Bundle: Host release level: VMware ESXi 5.5.0 Update 1 Bundle release level: VMware ESXi 5.5.0 Update 2
-->

* Note: this tar archive is also used in method #3 to restore.  
Method #2 creates a binary file which is not at all like the tar archives.  

2.)  Command line on the VMware Management Appliance (vMA)

to create a Backup:  

vicfg-cfgbackup --server=ESXi_host_IP_address --username=root -s /tmp/ESXi_test1_backup.txt
(the -s flag is to save it)

This will create a BINARY file in /tmp/ even though we called it *.txt.  If you want, you can look at this file in VIM or any hex editor, like this:

vim /tmp/ESXi_test1_backup.txt
in VIM, type :%!xxd to view it as hex
and :%!xxd -r to go back to normal mode
(xxd is present in any vim installation)

to restore that backup: 
vicfg-cfgbackup --server=ESXi_host_IP_address --username=root -l=/tmp/ESXi_test1_backup.txt -f


That last "-f" is there to force it: if you are restoring to different hardware, as I was, the restore will not proceed without it.


3.)  On the ESXi shell


Put the host into maintenance mode by running the command:

      vim-cmd hostsvc/maintenance_mode_enter

Copy the backup configuration file to a location on the host and run the command:

      vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz

(this is if you copied the configBundle.tgz file that you created with PowerCLI to this host /tmp directory) 
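For completeness, the same vim-cmd interface can also create the backup bundle directly on a host, in case you don't have PowerCLI or the vMA handy. A minimal sketch; the command prints a download URL that will differ on your host:

      vim-cmd hostsvc/firmware/backup_config

And if the destination host comes back up still in maintenance mode after the restore, take it out with:

      vim-cmd hostsvc/maintenance_mode_exit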


OK, after doing all this you now have a clone of the ESXi server you made the backup from.  This is fine if you have a failed server and you restore that configuration onto a replacement.

However, if you now have 2 servers with this same configuration, then besides the IP conflict, you will also have duplicate MAC addresses on all the interfaces.


If you look at the /etc/vmware/esx.conf file, you can see that ESXi maps all the hard-coded MAC addresses to virtual ones, so your clone, although it has different physical MAC addresses, will have the same VIRTUAL MAC addresses:


Once you clone, you will have EXACTLY the same file on the clone.  So you need to shut down the original machine and change these values.  You can do this by deleting all the lines circled in red, and changing the IP wherever it appears in that file (3 places).
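If you'd rather find the offending entries from the shell than scroll through the whole file, a quick grep narrows them down before you edit. A rough sketch; the exact key names in esx.conf vary between builds, so treat the patterns as a starting point and back the file up first:

cp /etc/vmware/esx.conf /etc/vmware/esx.conf.bak
grep -i mac /etc/vmware/esx.conf
grep -i ipaddr /etc/vmware/esx.conf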

Next step, reboot the server, and you now have a clone of the original server, which brings me to another point:

THIS WILL COPY OVER THE VMWARE LICENSE!!!

You HAVE to have an actual license on the destination server in order for this to work; the free or trial version won't work.  If you are using free ESXi, the remote commands are only available for "read-only" operations. For more details, please refer to this article here.

This is what you will get if you try it:


What you CAN do, if you want, is put in a license and then change it after the box is done. That was the case for me, as these boxes by design use the free ESXi version.



Friday, September 5, 2014

How to add multiple users to a Windows 2008 R2 Active Directory domain

This is just a quick post. I needed to add about 50 users to an AD OU, needed a way to do it quickly, and ran into many posts with conflicting scripts that didn't work.  Finally I created one myself that's simple enough.

You need 2 files, the PowerShell script, and the CSV file, which I will have links for at the bottom.

Put the two files in C:\Users\Administrator, open up PowerShell, and drag the script into it.


The script will load the AD Module ("Import-Module activedirectory" )

You just have to change the CSV file to match your settings.

Also, make sure the "Active Directory Web Services" service is started: open up services.msc from Start-->Run and start the service if it isn't running, otherwise you'll get an error like:

“Error initializing default drive: ‘Unable to find a default server with Active Directory Web Services running”




Wednesday, July 9, 2014

How to modify templates that do not work with the VMware customization wizard

Some Red Hat templates do not work with the VMware customization wizard.  For example, with anything below RHEL 5.6 you will get an error like this:




Clone virtual machine XXXXX-09 
Customization of the guest operating system 'rhel5_64Guest' is not supported in this configuration. Microsoft Vista (TM) and Linux guests with Logical Volume Manager are supported only for recent ESX host and VMware Tools versions. Refer to vCenter documentation for supported configurations.



So if you are creating 40 VM's, or however many, what you want to do is create them all from the template you're using, either through the GUI, choosing no customization as below:

Or the way I do it is through PowerCLI using this line (or lines)

New-vm -vmhost host15.domain.com -Name VM-RHEL5.6-01 -Template template-rhel56-64bit -Datastore datastore-NetApp1

It would then look like this:


(in the example above, I put 7 of these lines in a powerCLI script)

After the VM's are created, I would put them into folders and then modify CPU/memory, as well as the VLAN if needed, such as below:

CPU/MEMORY:
get-vm -Location "SOMEFOLDER" | set-vm -MemoryGB 4 -NumCpu 2 -confirm:$false 

Network (Portgroup)
get-vm -Location "SOMEFOLDER"  | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName "dvPort-vlan101-public-web" -confirm:$false 

Now, lastly, comes the Linux part, since these templates have an interface configured for DHCP.  If you have a DHCP server and can see the IP address, then you can just SSH to the VM and change what you need.  However, in our case these do not get an IP, so we need to get on the console and change things, as below:

Power the VM's on, then go to the console, log in and issue this command:

# system-config-network

You will then get a screen like this:


You then press enter, go into "Edit Devices" then the device you want, such as eth0, and you will get a screen like this:


You then press the space bar to remove the asterisk from "Use DHCP" and put in the IP address,
after which you exit, saving the changes, and then issue a:

#service network restart

Afterwards, SSH in (or do it in the console) and edit /etc/hosts to change the name of the VM, plus /etc/sysconfig/network, /etc/resolv.conf, and anything else you need to (change the password, ntp.conf, etc.).

You could also make all these changes directly in the file /etc/sysconfig/network-scripts/ifcfg-eth0 instead of using the Red Hat wizard shown above.
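For reference, a static ifcfg-eth0 would look roughly like this; the addresses below are made-up example values, and depending on the template you may also have HWADDR/UUID lines in there that you'll want to review on a clone:

# /etc/sysconfig/network-scripts/ifcfg-eth0  (example values)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.50
NETMASK=255.255.255.0
GATEWAY=192.168.10.1

Then restart networking with "service network restart" as above.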



Monday, June 16, 2014

How to configure a Dell M1000e with 16 M620 Blades

So you got a new Dell M1000e Chassis and 16 M620/M630 blades.... how do you configure them all to be one large VMware farm? (or configure them with any OS you need)


Here are the steps:


1.  Prepare all your IP's for the iDRACs, as well as for the ESXi management IPs/VLANs.

2.  First off, you will need to give the CMC an IP address. You can do this either via the LCD or via a console; in the case of a console, you do something like this:

First, enable VLAN support on the NIC if it isn't already (the command below enables VLAN tagging, which you only need if you plan to set a VLAN ID as in the step further down):

racadm config -g cfgLanNetworking -o cfgNicVLanEnable 1

Then Set the IP:

setniccfg -s 172.25.66.10 255.255.255.0 172.25.66.1

If you want to set a VLAN, which I did in my case, you would do this:

racadm config -g cfgLanNetworking -o cfgNicVLanID 205

Now go and turn all the blades on. You can do this by going to the CMC URL, in this case:

http://172.25.66.10  (the default user/password for all Dell iDRAC's is root and calvin)

Then as you can see in the picture below, you can power them all on or off.
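You can also do this from the CMC command line if you prefer. racadm getmodinfo lists the blades and their power state; the serveraction syntax below is from memory, so double-check it with "racadm help serveraction" on your firmware before relying on it:

racadm getmodinfo

Then, to power on a single blade, or all of them at once:

racadm serveraction -m server-1 powerup
racadm serveraction -a powerup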


3.  Now we want to configure an IP for each blade's iDRAC. You can either do it via the web GUI (SLOW) or via the CLI (FAST).

First off, you need to enable the iDRAC NIC on each blade. You can do this like so:

SSH to the CMC, then issue this:

racadm config -g cfgServerInfo -o cfgServerNicEnable 1 -i 1
racadm config -g cfgServerInfo -o cfgServerNicEnable 1 -i 2
racadm config -g cfgServerInfo -o cfgServerNicEnable 1 -i 3


This is what it would look like:



and so on, all the way to #16. Then you can create your IP list and copy/paste it in, like this:


racadm setniccfg -s  10.15.290.26 255.255.254.0  10.15.291.254 -m server-2
racadm setniccfg -s  10.15.290.27 255.255.254.0  10.15.291.254 -m server-3

This is what it would look like:



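Rather than typing out 16 of those setniccfg lines by hand, you can generate them with a quick loop on any Linux box and paste the output into the CMC SSH session. Just a sketch; the IP range, netmask and gateway below are placeholders, so adjust the arithmetic to your own addressing:

# print one setniccfg line per blade (server-1 through server-16)
# placeholder addressing: blade N gets 10.15.29.(24+N)
for i in $(seq 1 16); do
  echo "racadm setniccfg -s 10.15.29.$((24 + i)) 255.255.254.0 10.15.29.254 -m server-$i"
done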
Now that all the blades have an iDRAC IP, you can configure ESXi or anything else you want. First, go to the main page and click on "Remote Console"; that will open the console window.




I had all 16 set up on my screen like this:


As you can see in the picture, I had already installed ESXi, as this was done through a kickstart server (these servers did not come with the SD cards that have ESXi on them).

The next part, which I will cover in the next post, is the one that saves a lot of time: how to configure all these blades with Ansible.




Thursday, May 8, 2014

Resizing a disk on a Linux VM (LVM Based)

We needed to resize a VM from its initial 25 GB virtual hard drive to a 40 GB one.  Below are the steps for how to do it, and what it looks like.

Step 1: Resize the disk in VMware, either through vSphere, or PowerCLI if you'd like.  I happened to have vSphere open so I did it that way:

(note, you will have to turn off the VM in order to do this)




After the VM has come back up, you do the following. Here is a shortened version, and below it the actual output you get:

----------------------------------------------------------------------- 
echo 1 > /sys/class/scsi_device/1\:0\:0\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan

fdisk /dev/sda
d
2
n
p
2
502
ENTER
w

reboot

2.       After the server reboots, run these commands:


partx -a /dev/sda
pvresize /dev/sda2

(the next two lines are only needed if you added a new partition, /dev/sda3, instead of growing /dev/sda2 as I did above)
pvcreate /dev/sda3
vgextend vg_root /dev/sda3

lvextend -l +100%FREE /dev/mapper/vg_root-lv_root
resize2fs /dev/mapper/vg_root-lv_root

(to see what your VG is called, run  #vgdisplay | grep "VG Name" )

-----------------------------------------------------------------------

Now here it is in detail:

[root@node01 ~]# echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan
[root@node01 ~]# echo 1 > /sys/class/scsi_device/1\:0\:0\:0/device/rescan

[root@node01 ~]# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sda: 26.8 GB, 26843545600 bytes
64 heads, 32 sectors/track, 25600 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a646c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           2         501      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             502       16384    16264192   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Command (m for help): d
Partition number (1-4): 2

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (1-25600, default 1): 502
Last cylinder, +cylinders or +size{K,M,G} (502-25600, default 25600):
Using default value 25600

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@node01 ~]#
[root@node01 ~]# reboot

Broadcast message from root@node01
        (/dev/pts/1) at 21:12 ...

The system is going down for reboot NOW!
[root@node01 ~]# Red Hat Enterprise Linux Server release 6.2 (Santiago)
Kernel 2.6.32-220.el6.x86_64 on an x86_64


[root@node01 ~]# pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@node01 ~]# lvextend -l +100%FREE /dev/mapper/vg_root-lv_root
  Extending logical volume lv_root to 20.50 GiB
  Logical volume lv_root successfully resized
[root@node01 ~]# resize2fs /dev/mapper/vg_root-lv_root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg_root-lv_root is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/mapper/vg_root-lv_root to 5373952 (4k) blocks.
The filesystem on /dev/mapper/vg_root-lv_root is now 5373952 blocks long.

[root@node01 ~]#
[root@node01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root
                       21G  5.0G   15G  27% /
tmpfs                  20G     0   20G   0% /dev/shm
/dev/sda1             485M   36M  424M   8% /boot


And that's all there is to it.  You now have a 40 GB drive instead of a 25 GB one.


Thursday, April 10, 2014

Clustering RedHat VM's (Virtual Machines) in ESXi 4.1, 5.1, 5.5

I had to cluster 2 VM's in ESXi 5.1 for a certain application. Not your run-of-the-mill activity, but it happens.  Here are the steps:

First off, we install the VM's, then we need to set up the clustering.

You will need the following RPM's:
  • cluster-glue-libs-1.0.5-2.el6.x86_64.rpm
  • clusterlib-3.0.12.1-23.el6.x86_64.rpm
  • cman-3.0.12.1-23.el6.x86_64.rpm
  • corosync-1.4.1-4.el6.x86_64.rpm
  • corosynclib-1.4.1-4.el6.x86_64.rpm
  • fence-agents-3.1.5-10.el6.x86_64.rpm
  • fence-virtd-0.2.3-5.el6.x86_64.rpm
  • fence-virtd-libvirt-0.2.3-5.el6.x86_64.rpm
  • ipmitool-1.8.11-12.el6.x86_64.rpm
  • libibverbs-1.1.5-3.el6.x86_64.rpm
  • librdmacm-1.0.14.1-3.el6.x86_64.rpm
  • libtool-2.2.6-15.5.el6.x86_64.rpm
  • libtool-ltdl-2.2.6-15.5.el6.x86_64.rpm
  • libvirt-0.9.4-23.el6.x86_64.rpm
  • libvirt-client-0.9.4-23.el6.x86_64.rpm
  • luci-0.23.0-32.el6.x86_64.rpm
  • modcluster-0.16.2-14.el6.x86_64.rpm
  • nss-tools-3.12.10-16.el6.x86_64.rpm
  • oddjob-0.30-5.el6.x86_64.rpm
  • openais-1.1.1-7.el6.x86_64.rpm
  • openaislib-1.1.1-7.el6.x86_64.rpm
  • perl-Net-Telnet-3.03-11.el6.noarch.rpm
  • pexpect-2.3-6.el6.noarch.rpm
  • python-formencode-1.2.2-2.1.el6.noarch.rpm
  • python-paste-1.7.4-1.el6.noarch.rpm
  • python-repoze-who-1.0.13-2.el6.noarch.rpm
  • python-repoze-who-friendlyform-1.0-0.3.b3.el6.noarch.rpm
  • python-setuptools-0.6.10-3.el6.noarch.rpm
  • python-suds-0.4.1-3.el6.noarch.rpm
  • python-toscawidgets-0.9.8-1.el6.noarch.rpm
  • python-tw-forms-0.9.9-1.el6.noarch.rpm
  • python-webob-0.9.6.1-3.el6.noarch.rpm
  • python-zope-filesystem-1-5.el6.x86_64.rpm
  • python-zope-interface-3.5.2-2.1.el6.x86_64.rpm
  • resource-agents-3.9.2-7.el6.x86_64.rpm
  • rgmanager-3.0.12.1-5.el6.x86_64.rpm
  • ricci-0.16.2-43.el6.x86_64.rpm
  • samba-client-3.5.10-114.el6.x86_64.rpm
  • sg3_utils-1.28-4.el6.x86_64.rpm
  • TurboGears2-2.0.3-4.el6.noarch.rpm

I copied all these RPM's to a folder, and ran the commands from there. You will also need to install all the dependencies, which will depend on your system.  This particular VM was pretty stripped down, so I needed to do the following using yum, and the rest with straight RPM install. (You could write down all the dependencies yum installs, but I didn't)

yum install libvirt-client-0.9.4-23.el6.x86_64
yum install sg3_utils
yum install libvirt
yum install fence-agents
yum install fence-virt
yum install TurboGears2
yum install samba-3.5.10-114.el6.x86_64
yum install cifs-utils-4.8.1-5.el6.x86_64


rpm -ivh perl*
rpm -ivh python*
rpm -ivh   python-repoze-who-friendlyform-1.0-0.3.b3.el6.noarch.rpm
rpm -ivh python-tw-forms-0.9.9-1.el6.noarch.rpm
rpm -ivh fence-virtd-0.2.3-5.el6.x86_64.rpm 
rpm -ivh libibverbs-1.1.5-3.el6.x86_64.rpm
rpm -ivh librdmacm-1.0.14.1-3.el6.x86_64.rpm
rpm -ivh corosync*
rpm -ivh libtool-ltdl-2.2.6-15.5.el6.x86_64.rpm
rpm -ivh cluster*
rpm -ivh openais*
rpm -ivh pexpect-2.3-6.el6.noarch.rpm
rpm -ivh ricci-0.16.2-43.el6.x86_64.rpm
rpm -ivh oddjob-0.30-5.el6.x86_64.rpm
rpm -ivh nss-tools-3.12.10-16.el6.x86_64.rpm 
rpm -ivh oddjob-0.30-5.el6.x86_64.rpm
rpm -ivh ipmitool-1.8.11-12.el6.x86_64.rpm
rpm -ivh fence-virtd-libvirt-0.2.3-5.el6.x86_64.rpm
rpm -ivh libvirt-0.9.4-23.el6.x86_64.rpm
rpm -ivh modcluster-0.16.2-14.el6.x86_64.rpm
rpm -ivh ricci-0.16.2-43.el6.x86_64.rpm
rpm -ivh fence*
rpm -ivh cman-3.0.12.1-23.el6.x86_64.rpm
rpm -ivh luci-0.23.0-32.el6.x86_64.rpm 
rpm -ivh resource-agents-3.9.2-7.el6.x86_64.rpm
rpm -ivh rgmanager-3.0.12.1-5.el6.x86_64.rpm 

Now you want to add a /etc/cluster/cluster.conf file, which would look like this:

<?xml version="1.0"?>
<cluster config_version="3" name="name-of-cluster">
        <clusternodes>
                <clusternode name="vm01" nodeid="1">
                        <fence>
                                <method name="VMWare_Vcenter_SOAP">
                                        <device name="vcenter" port="DATACENTER/FOLDER/VM01" ssl="on"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="vm02" nodeid="2">
                        <fence>
                                <method name="VMWare_Vcenter_SOAP">
                                        <device name="vcenter" port="DATACENTER/FOLDER/VM02" ssl="on"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" transport="udpu" two_node="1"/>
        <rm>
                <failoverdomains>
                        <failoverdomain name="failoverdomain-1" nofailback="1" ordered="0" restricted="0">
                                <failoverdomainnode name="VM01"/>
                                <failoverdomainnode name="VM02"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <ip address="10.10.18.137" monitor_link="on" sleeptime="2"/>
                </resources>
                <service domain="failoverdomain-1" name="HTTP_service" recovery="relocate">
                        <script file="/etc/init.d/httpd" name="httpd"/>
                        <ip ref="10.10.18.137"/>
                </service>
        </rm>
        <fencedevices>
                <fencedevice agent="fence_vmware_soap" ipaddr="10.1.9.10" login="svc.rhc.fencer.la1" name="vcenter" passwd="chooseagoodpassword"/>
        </fencedevices>
        <fence_daemon post_join_delay="20"/>
        <logging debug="on" syslog_priority="debug">
                <logging_daemon debug="on" name="corosync"/>
        </logging>
</cluster>

This configuration file just monitors a simple httpd service; you can then put in whatever service you actually need clustered.
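Before starting the cluster services it's worth sanity-checking the file, and remember that the same cluster.conf has to exist on both nodes. ccs_config_validate ships with the RHEL 6 cluster packages; the node name below is just a placeholder:

ccs_config_validate

scp /etc/cluster/cluster.conf vm02:/etc/cluster/cluster.conf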

** Make sure DNS is working properly, that the /etc/hosts file is updated, and that the order is hosts,bind in /etc/host.conf.

Now let's start all the services and make sure they come on at boot time:

service cman start
service rgmanager start
service modclusterd start

chkconfig cman on
chkconfig rgmanager on                                                                                                      
chkconfig modclusterd on
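Once the services are up, you can check that both nodes have joined and that the service group is running; clustat comes with rgmanager and cman_tool with cman:

clustat

cman_tool status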

Next, we have to take care of a VMware user that only has rights to start/stop the VM's; that's the user/password you are referencing in the cluster.conf file.

* Note - if you have VMware authenticating against an Active Directory server, you need a specific user there.  In VMware 4.x that can be any user, and you assign the roles as below.  However, in VMware 5.x, which adds SSO (Single Sign-On), the Active Directory user now has to have certain rights (which I will fill in later in the blog), otherwise it will not work.

Next, we want to create a rule in vCenter so that both VM's don't end up on the same host, which would kill our clustering efforts if that VMware host went down.

So first we will get the cluster settings as below:



and now go to vSphere DRS, and to the rules under there:



Call the rule whatever you want, choose the VM's that are affected by it, and choose what to do with them; in this case I chose to separate them.

Your rules will now look like this:


In order to allow fence_vmware_soap to work, the configured vCenter user account needs to belong to a role with the following four permissions set in vSphere:
  • System.Anonymous
  • System.View
  • VirtualMachine.Interact.PowerOff
  • VirtualMachine.Interact.PowerOn
Next, we will create a role that is only allowed to start, stop, reset and suspend the VM's:

1) Go to "Home" => "Administration" => "Roles" => "[vSphere server name]"
2) Right-click in the left frame and select "Add..."
3) Name this role in any way, e.g: "RHEL-HA fence_vmware_soap"
4) Under "Privileges", expand the tree "All Privileges" => "Virtual machine" => "Interaction"
5) Check both the "Power Off" and "Power On" boxes
6) Press the "OK" button
7) Then associate this role with the user/group you want running fence_vmware_soap.


Under there, choose just the power on/off, suspend and reset privileges under "Virtual machine" => "Interaction".


Then we will create a service account; let's call it svc.la1. This could be done locally in the VMware database, in Active Directory, or in any other LDAP server you use to authenticate.

We will then look up that user (in this case the domain is "SS"), choose it, and click on "Add".


it will then look like this:


we will then test to see that it works:

[root@bc03 ~]# fence_vmware_soap --ip 10.10.10.10 --username "USER YOU CREATED" --password "PASSWORD" -z -U 4211547e-65df-2a65-7e17-d1e731187fdd --action="reboot"

Btw, you can get the UUID of the machine by using PowerShell like this:

PowerCLI> Get-VM "NAME_OF_VM" | %{(Get-View $_.Id).config.uuid}



if it doesn't work, it will say something like this:

[root@bc03 ~]# fence_vmware_soap --ip 10.10.10.10 --username "svc.rhc.local" --password "password" -z --action reboot -U "4211547e-65df-2a65-7e17-d1e731187fdd"                           
No handlers could be found for logger "suds.client"
Unable to connect/login to fencing device
[root@bc03 ~]# 

(I'm running this command from the other VM, BC03, so as to fence the other node and not myself.)

The first line is just a Python message saying that the script doesn't have logging configured; however, the second message is saying that there's an authentication problem.  You can confirm this by looking at the vCenter server logs, at C:\ProgramData\VMware\VMware VirtualCenter\Logs\, or by creating an "Admin" user in AD and seeing if it then works (see my note above).





Thursday, March 13, 2014

Deploying multiple VM's using VMware PowerCLI




Today we will detail how to deploy multiple VM's (I would say the minimum number where this is worth the effort is around 10 VM's, but that's up to you).

If all you want is to deploy 10+ VM's with identical configurations, then you don't need this; a simple one-liner will do. This post is for the more involved case.

We are talking about deploying, say, 25 VM's from a certain template into a specific folder, where each VM has a different hostname, memory allocation and CPU, as well as a static IP.

In this case, you need to create an Excel file, which you'll save as CSV, with the following columns:

VMName
VMHost
Datastore
Template
Customization
IPAddress
Subnetmask
DefaultGateway
DNS

You could add some more columns in there if you wanted, for example for the memory and CPU.
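For reference, a couple of rows of that CSV might look like the following; the VM names, customization spec name and addresses are made-up examples (the host, datastore and template names are borrowed from the earlier posts):

VMName,VMHost,Datastore,Template,Customization,IPAddress,Subnetmask,DefaultGateway,DNS
web-vm-01,host15.domain.com,datastore-NetApp1,template-rhel56-64bit,rhel-static,10.0.0.21,255.255.255.0,10.0.0.1,10.0.0.5
web-vm-02,host15.domain.com,datastore-NetApp1,template-rhel56-64bit,rhel-static,10.0.0.22,255.255.255.0,10.0.0.1,10.0.0.5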

Then you would, of course, need to install PowerCLI; I have another post talking about that briefly.

You would then put this CSV file in the directory of your choice (in my case it's at C:\Scripts\)
and modify the command below to your liking.  I highlighted the parts you need to change:


Import-Csv "C:\Scripts\NewVMsCR2.csv" -UseCulture | %{
    Get-OSCustomizationSpec $_.Customization | Get-OSCustomizationNicMapping | Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress $_.IPAddress -SubnetMask $_.Subnetmask -DefaultGateway $_.DefaultGateway
    $vm=New-VM -Name $_.VMName -Template $_.Template -Host $_.VMHost -Location "WEB SERVERS" -Datastore $_.Datastore -Confirm:$false -RunAsync -OSCustomizationSpec $_.Customization
}

You do need to have the template that's referenced in the Excel sheet, as well as the customization profile, and, in the case of the command above, also the folder "WEB SERVERS"; otherwise it will error out.

However once all is ready, you are now ready to deploy as many VM's as you need.

Here is a picture of it in action (some IP's changed for security reasons)


Wednesday, March 12, 2014

VMware PowerCLI tips and tricks



Instead of using the GUI for various vCenter/VMware tasks, it's much easier to download and install PowerCLI. You may want to do the following after installing it:

PowerCLI C:\>Set-ExecutionPolicy RemoteSigned

Also, if you're working on several vCenters like I do, you may want to change the window title to something like this:
$host.ui.RawUI.WindowTitle = "SET WINDOW TITLE HERE"



Here are some tasks you can easily do with the PowerCLI:

To find all the VM's with abcd1234 in their name:

1. Connect to the vCenter, open up PowerCLI and type:   Connect-VIServer <name of vCenter>
2. type  PowerCLI C:\> get-vm abcd1234*

You will see something like this


(sorry about the blackouts- this is from a production vCenter)

Now let's say we wanted to power off all these VM's in order to change the memory to 7 GB of RAM and the CPU to 1 vCPU; we would do this:

1. PowerCLI C:\> get-vm abcd1234* | stop-vm -RunAsync -confirm:$false
2. PowerCLI C:\> get-vm abcd1234* | set-vm -MemoryGB 7 -NumCpu 1 -Notes "H/W modified $(Get-Date)" -confirm:$false


You may have a case where too many VM's show up, but your VM's are in a folder (as they should be!); you would then use:

PowerCLI C:\>get-vm -Location "foldermisc" | set-vm -MemoryGB 25 -NumCpu 1 -confirm:$false



If we wanted now to power all these VM's on, we would run:

PowerCLI C:\> get-vm abcd1234* | start-vm -RunAsync -confirm:$false

Another handy command is for when you want to change a whole bunch of VM's to a different port group (VLAN); you would then do this, for example:

Find out what the exact name of all the port groups are:

PowerCLI C:\> Get-VirtualPortGroup

Then set the vlan for the relevant VM's:

PowerCLI C:\> Get-VM pglap-* | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName dvPort-vlan1


If you want to avoid the confirmation prompt, just add -confirm:$false at the end; I forgot it above (!)

PowerCLI C:\> Get-VM pglap-* | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName dvPort-vlan1 -confirm:$false


Moving VM's from one datastore to another (Storage vMotion)

For example, I want to move all the VM's or just VM's with a certain name to another datastore.  I would use the same expressions as above, and do the following:

PowerCLI C:\> Get-VM -Location "NAME_OF_FOLDER" | Move-VM -DiskStorageFormat Thin -Datastore "DATASTORE_2"

We have now moved all the VM's in "NAME_OF_FOLDER" (Obviously substitute that for the name of your actual folder, or do something like "get-vm abcd1234*")  to the new datastore "DATASTORE_2"

To check if they have in fact moved, we could run this command (this would show us all the VM's in vCenter, again you could do a get-vm just for the subset you want)

PowerCLI C:\>Get-VM | Select Name, @{N="Datastore";E={$_ | Get-Datastore}}