Thursday, December 1, 2011

Step by Step: Installing XenServer 6.0.0 from scratch

Today we will walk through a fresh installation of Citrix XenServer 6.0.0.

First of all, you will need to download the .ISO file (or insert the CD if you already have it).
Since I don't have it, I will download a trial and burn it to disc.

Step 1: Download the file:


Step 2: Burn it to CD:


Step 3: Install XenCenter on your machine. This is the client piece, like the vSphere Client for VMware, used to connect to the XenServer.

Inside the CD image we just burnt, there's a directory called \client_install that has a XenCenter.MSI in it:



and....


Step 4: Now we will take this CD and put it in the server on which we want the XenServer hypervisor to reside.

When you reach the pre-install menu, it will look like this:



Step 5: You will be asked a few questions during the install, which I won't cover here but which you can pretty much figure out on your own, such as which disk to install on, the keyboard layout, whether to enable thin provisioning, etc.  When it's done installing and you reboot, you will get this screen:


Step 6: As you can see, XenServer is now installed. We will connect to it using the XenCenter we installed earlier.  You can find it under the Citrix Program Group:


You will now take the IP address (or hostname, if you set one up) and enter it in the "Add Server" dialog, as in the screenshot below:


Enter the password (which was defined during the installation in Step 5) and you will get a screen like the one below:


As you can see, we have added this XenServer to our XenCenter, and can now add a virtual machine, storage or networking as needed. 
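
If you prefer the command line, you can also sanity-check the host from its local console or over SSH using the xe CLI that ships with XenServer. This is just an optional verification sketch, not part of the XenCenter workflow above:

xe host-list
*lists the host with its uuid and name-label
xe vm-list
*lists the VMs known to this host (at this point only the control domain)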

Next we will do a step-by-step on adding a VM to XenServer, stay tuned!

Friday, November 18, 2011

Creating Aggregates, Volumes and more on NetApp

By: Boaz Minitzer
Tag: NetApp ONTAP Release 7.2.4

 

A.   Create aggregates

Initially, the filer must be set up with a system disk on the SAN, and an aggregate will need to be created from a serial terminal session. For a brand new installation, the setup program can be run as soon as a LUN has been exported to the filer. If the system is powered up, at the boot prompt type "bye" to exit the boot prompt and begin the setup configuration. The section below covers creating additional, non-system aggregates.
CLI
Example FILER2:
FILER2>disk show -n
*this will list all un-owned disks exported to the filer
FILER2>aggr create aggr1 -d sandir1:42.126L22 sandir1:42.126L23
*the two sandir disks above represent two 500 GB LUNs exported to the filer; aggr1 is used because the system aggregate, typically created as aggr0, should not be used for data storage.
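
To confirm the new aggregate came up as expected, you can run a quick (optional) check from the same prompt:

FILER2>aggr status aggr1
*shows the state, RAID type and disks of the new aggregate
FILER2>df -A aggr1
*shows used and available space for the aggregate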

FilerView GUI
  1. Click ‘Aggregates’ in left sidebar, click Next.
  2. Type the aggregate name in the text box: aggr1 (2,3,4….and so on), click Next.
  3. Leave RAID group size at the default, click Next.
  4. Click the radio button for Manual on the “Disk Selection Method” screen and click Next.
  5. Select Disk Type: LUN from the drop down menu, and click Next.
  6. On the Manual Disk Selection page, select the LUNs you have exported to the filer from the window on the right hand side of the page. Use shift+ctrl to select multiple LUNs, click Next.
  7. Click Next to return to the original browser window.
  8. Refresh your browser and the newly created aggregate should appear.


B.   Create volumes


FilerView GUI:

  1. Click Volumes: Add,
  2. Click Next to start in popup window,
  3. Click Next (choose default of flexible volume), choose volume name, click Next
  4. Click Next, select the aggregate you would like to create the volume on, and select the default for the space guarantee.
  5. Choose Usable Size, specify the size in MB and the snap reserve, and click Next.
  6. Click Next.
  7. Go back to main FilerView window and click Volumes:Manage to refresh.  You should see the new volumes. 
  8. On the console, type df -A to show aggregate information.  The %used for the aggregate aggr0 shouldn't be more than 94%, to avoid problems, per NetApp support.
  9. To set the language on the volume, run vol lang <vol-name> en_US on each filer (FILER1, FILER2).  This is needed for each new volume created from the command line, but not for volumes created from FilerView.

Command Line:
Type vol create <vol-name> -l <language-code> <hosting-aggr-name> <size>[k|m|g|t]

FILER1>vol create test_vol -l en_US aggr0 500m
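
As an optional check, using the test_vol example above, you can confirm the volume's size and language from the same prompt:

FILER1>vol status test_vol
*shows the volume state and options
FILER1>df -h test_vol
*shows used and available space in human-readable units
FILER1>vol lang test_vol
*shows the language code set on the volume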


C.   Create Qtree(s)

Create qtree(s) to provide directories where data can be managed.  A qtree will have to be created for each directory that needs a quota or needs to be mirrored.  Since migrating the directories currently on EMC to NetApp will yield only a couple hundred qtrees at most, this leaves plenty of room for growth before you reach the maximum of 4995 qtrees per volume.

FilerView GUI:

  1. Click Volumes: Manage: Qtrees: Add
  2. Enter the appropriate values and click Add.  Leave Oplocks checked.

Command Line:
Filer> qtree create /vol/<vol-name>/<qtree-name>
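
For example, to create a qtree named data inside the test_vol volume created earlier and then list it (the names are illustrative):

FILER1>qtree create /vol/test_vol/data
FILER1>qtree status test_vol
*lists the qtrees on the volume along with their security style and oplocks setting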

D.   Set up NFS exports on volumes/qtrees


In FilerView GUI:
  1. Click NFS: Add Export
  2. Click appropriate path options (e.g. Read/write for /vol/hiwnp/app), then Next
  3. Add appropriate hosts to export to (e.g. caliwww1, caliwww2 for /vol/hiwnp/app) and click Next

If the export exists and you want to add permissions/hosts:
  1. Click NFS: Manage Exports.
  2. Select the export.
  3. If export options need to be updated, do so, then click Next.
  4. Leave the export path as it is, click Next.
  5. Add read-only, read-write, or root hosts as selected in the export options, click Next.
  6. Leave the security options as they are, click Next.
  7. Click Commit to finalize your changes.

Command line:
  1. In one terminal, open /net/<filer-name>/vol/vol0/etc/exports and select an example rule you'd like to copy.  Check for an existing rule for the qtree you want to export.
  2. Run exportfs -io <desired-options-here-from-exports> <path-to-export>  (this exports without changing /net/<filer-name>/vol/vol0/etc/exports)

If access from the client is OK, then insert/edit the line in /net/<filer-name>/vol/vol0/etc/exports with <desired-options-here-from-exports>.
If you messed things up, run exportfs -r to revert.

If the export exists and you want to add permissions/hosts:
  1. In one terminal, open /net/<filer-name>/vol/vol0/etc/exports and select and copy the existing rule you want to update.
  2. Run exportfs -io <desired-options-here-from-exports> <path-to-export>  (this exports without changing /net/<filer-name>/vol/vol0/etc/exports)

If access from the client is OK, then insert/edit the line in /net/<filer-name>/vol/vol0/etc/exports with <desired-options-here-from-exports>.
If you messed things up, run exportfs -r to revert.
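
As a concrete, illustrative example of step 2, using the /vol/hiwnp/app path and caliwww hosts from the GUI section above, the command could look like this:

Filer> exportfs -io rw=caliwww1:caliwww2,root=caliwww1 /vol/hiwnp/app
*exports the path read-write to the two web hosts without touching the exports file; once verified from a client, copy the same options into /net/<filer-name>/vol/vol0/etc/exports to make them persistent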

Note: use the root= option only when needed, i.e. where root needs to write to the file system.  Root can still read from the NetApp without it, but chowning or any other write operation may not work, and file ownership may not translate.

E.   Set up CIFS exports for volumes/qtrees

In FilerView GUI:
  1. Click CIFS.
  2. Click Shares.
  3. Click Add.
  4. Fill in the required fields (Share Name, Mount Point, Share Description).
  5. Click Add.
  6. Change access as necessary.
From the command line (filer local prompt):
cifs shares -add <share name> <path> -comment <description>
FILER1> cifs shares -add test /vol/data/test -comment "this is a test volume"
Modify permission as necessary.
cifs access <share> [-g] <user|group> <rights>
FILER2>cifs access test bminit:dvrt rwx
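
As an optional check, you can list the share afterwards to confirm the path and access rights:

FILER1>cifs shares test
*displays the test share, its path, comment and the access that has been granted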





Monday, October 31, 2011

Creating a New Virtual Machine on VMware 5.0


Today we will go through the process of creating a new virtual machine on the new VMware 5.0 infrastructure.  I won't go into the process of installing the host and connecting to it with vSphere, as that's pretty straightforward.

I will just expand on the parts that are different from VMware 4.x (not much!).

Step 1: Right-click the host and choose "New Virtual Machine"


Step 2: Choose either Typical or Custom


Step 3: Choose a name for your VM



Step 4:  Choose a datastore (in the case here of our test server, there is only one datastore configured)


Step 5: Choose Guest Operating System


Step 6: Choose network connections


Step 7: Choose virtual disk size and provisioning policy



This is where it differs a little from VMware 4.x:
You have 4 options:
Same format as source: Use the same format as the source virtual machine (doesn't apply in this case).
Thick Provision Lazy Zeroed: Create a virtual disk in the default thick format. Space required for the virtual disk is allocated during creation. Any data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time, on first write from the virtual machine.
Thick Provision Eager Zeroed: Create a thick disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In contrast to the flat format, the data remaining on the physical device is zeroed out during creation. It might take much longer to create disks in this format than to create other types of disks.
Thin Provision: Use the thin provisioned format. At first, a thin provisioned disk uses only as much datastore space as the disk initially needs. If the thin disk needs more space later, it can grow to the maximum capacity allocated to it.








(We will choose Thin Provision and leave it at 16 GB; the installation takes about 7 GB, so we should be fine.)
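
For reference, the same provisioning choices exist on the ESXi command line via vmkfstools. This is only an illustrative sketch (the datastore path and file names are placeholders), not part of the vSphere Client workflow above:

# create a 16 GB thin-provisioned virtual disk
vmkfstools -c 16g -d thin /vmfs/volumes/datastore1/testvm/testvm.vmdk
# compare the provisioned size with the space the thin disk actually consumes
ls -lh /vmfs/volumes/datastore1/testvm/
du -h /vmfs/volumes/datastore1/testvm/testvm-flat.vmdk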

Step 8: Summary


We now have the summary, but we want to edit this machine before submitting it, because we want to attach the CentOS ISO file as the boot drive.



Step 9: Assigning a .ISO image as the boot drive

Now we will see the screen below. I reduced the memory from the default 2 GB to 1 GB and chose the CentOS image to boot from:


Note: check "Connect at power on", but remember to uncheck it later; otherwise, whenever the VM is rebooted, it will boot from the CD/DVD.


Final step: start up the machine, and install CentOS 6.2!









Edited by: Boaz Minitzer















Wednesday, May 18, 2011

Linux Redhat Clusters name resolution and Split Brain issues



How Broadcast Signaling Works
Placing this issue in the right perspective requires first understanding something about how broadcast signaling works. When a member is invoked, it is supposed to issue a broadcast message to the network. The member does so using its cluster name as an identifier. If there happens to be a cluster present on the network with the same cluster name, that cluster is expected to reply to that broadcast message with its node name. In that case, the joining member should send a join request to the cluster. The default port being used by the cluster for issuing broadcast messages is 5405.
You will see these messages when opening the NIC in promiscuous mode with either wireshark or tcpdump:

22:25:36.455369 IP server-01.5149 > server-02.netsupport: UDP, length 106
22:25:36.455531 IP server-02.5149 > server-01.netsupport: UDP, length 106
22:25:36.665363 IP server-01.5149 > 255.255.255.255.netsupport: UDP, length 118
22:25:36.852367 IP server-01.5149 > server-02.netsupport: UDP, length 106
22:25:36.852526 IP server-02.5149 > server-01.netsupport: UDP, length 106



How Name resolution works and cluster.conf

Cman (the Cluster Manager) tries hard to match the local host name(s) to those mentioned in cluster.conf. Here's how it does it:

1. It looks up $HOSTNAME in cluster.conf.
2. If this fails, it strips the domain name from $HOSTNAME and looks that up in cluster.conf.
3. If this fails, it looks in cluster.conf for a fully-qualified name whose short version matches the short version of $HOSTNAME.
4. If all this fails, it searches the interfaces list for an (IPv4-only) address that matches a name in cluster.conf.
cman will then bind to the address that it has matched.
Note: we will have to make sure the settings in /etc/nsswitch.conf are set to:
hosts:      files nis dns
nsswitch.conf is a facility in Linux operating systems that provides a variety of sources for common configuration databases and name resolution mechanisms.
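
A quick sanity check (a sketch, assuming the standard /etc/cluster/cluster.conf location) to confirm that the name cman will match resolves through the sources listed in nsswitch.conf:

# show how the local host name resolves via files/nis/dns
getent hosts $(hostname)
# show the node names cman will try to match against
grep clusternode /etc/cluster/cluster.conf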




Split Brain Issues

One of the most dangerous situations that can happen in clusters is that both nodes become active at the same time. This is especially true for clusters that share storage resources. In this case both cluster nodes could be writing to the data on shared storage which will quickly cause data corruption.
When both nodes become active, it is called "split brain"; this can happen when a cluster node stops receiving heartbeats from its partner node. Since the two nodes are no longer communicating, they do not know whether the problem is with the other node or with themselves.
For example, say the passive node stops receiving heartbeats from the active node due to a failure of the heartbeat network. In this case, if the passive node starts the cluster services, you would have a split-brain situation.
Many clusters use a Quorum Disk to prevent this from happening. The Quorum Disk is a small shared disk that both nodes can access at the same time. Whichever node is currently the active node writes to the disk periodically (usually every couple of seconds) and the passive node checks the disk to make sure the active node is keeping it up to date.
When a node stops receiving heartbeats from its partner node it looks at the Quorum Disk to see if it has been updated. If the other node is still updating the Quorum Disk then the passive node knows that the active node is still alive and does not start the cluster services.
Red Hat clusters support quorum disks, but Red Hat support has recommended not to use one, since they are difficult to configure and can become problematic. Instead, they recommend relying on fencing to prevent split brain.
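
If you want to see the current membership and quorum state from the shell (assuming the standard Red Hat Cluster Suite tools are installed), these read-only commands are handy:

# membership, quorum and expected votes as seen by cman
cman_tool status
# overall cluster and service status
clustat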

Friday, April 22, 2011

Various Java Monitoring tools and how to use them


  • Note: to be able to run any X applications remotely, you may have to set your local X server to accept connections from the remote machine, or just allow all with a command like xhost +
  • If you are running this locally, make sure the server is at runlevel 5 rather than 3 (many servers run at runlevel 3); you can switch by issuing "telinit 5" or just typing "startx"
  • If you are running X through SSH, make sure X11Forwarding is set to yes in /etc/ssh/sshd_config

JVisualVM


JVisualVM is a tool for profiling a running JVM. It comes bundled with the Oracle JDK 1.6+ and can be found in the %JAVA_HOME%\bin directory.

This is very useful for monitoring the memory, CPU and thread usage of a running JVM.
The tool can take snapshots that can be analyzed offline later; it is very useful to keep these snapshots.

The best way to run it is locally on the machine hosting the JVM; the executable name is simply jvisualvm. This way you get both CPU and heap profiling, but it requires tunneling an X session via SSH or exporting the display of the remote machine.
Connecting remotely provides less information than connecting locally; notably, there is no CPU profiling.
There are two ways to run it remotely:

• JMX RMI connection
• jstatd RMI connection

• JMX RMI
Add the following JVM arguments when launching the java process:
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=12345
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
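
For example, a full launch command with these flags might look like the following (MyApp.jar is just a placeholder for your own application; authentication and SSL are disabled here, so only do this on a trusted network):

java -Dcom.sun.management.jmxremote=true \
     -Dcom.sun.management.jmxremote.port=12345 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar MyApp.jar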

Running jvisualvm locally or via a JMX RMI connection enables a visual view of the threads running inside the process. This is not available via a jstatd remote connection.

jstatd

jstatd is another tool that ships with the standard Oracle JDK, in the bin directory. It is a small server process that runs alongside the JVM, providing instrumentation information to remote clients.
jstatd should be considered if JVisualVM cannot be run locally or if a JMX RMI connection cannot be established.
jstatd requires a security policy file.

Create a new text file called jstatd.all.policy in %JAVA_HOME%\bin with the following contents:

grant codebase "file:${java.home}/../lib/tools.jar" {
 permission java.security.AllPermission;
};

Run jstatd on the host machine with this command:
./jstatd -J-Djava.security.policy=jstatd.all.policy &

The trailing & is for Linux only; it makes the process run in the background.

Then run jvisualvm.
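
Once jstatd is up, you can also point the command-line tools at it from your workstation (remotehost below is a placeholder for the server running jstatd):

# list the JVMs visible through jstatd on the remote machine
jps -l remotehost
# sample GC utilization of one of those JVMs every 5 seconds, using a pid returned by jps
jstat -gcutil <pid>@remotehost 5s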


jconsole

jconsole allows remote management of a running JVM. It provides more information about a running Java virtual machine and can change some settings via JMX (logging, etc.).
JMX is a connection level protocol which means there is more than one way to connect to the same JMX agent running inside the JVM.
• JMX API for connecting locally
• RMI – requires JVM arguments
• JMXMP – requires special configuration on the client side but can be more secure in a production environment
Running JConsole locally - JMX API
To connect just type jconsole on the command line and select the desired process from the list when JConsole loads.
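
If the JMX RMI flags shown in the JVisualVM section were added to the target JVM, the same remote connection also works from the command line (remotehost is a placeholder, and 12345 is the port configured earlier):

jconsole remotehost:12345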