Thursday, October 10, 2013

Quickly configuring password-less SSH between Unix Hosts

Sometimes you need to do a lot of work across dozens of hosts (in my case 65 nodes, all of which needed clustering and other configuration), so the steps below will save you time.  There are security considerations, though, so you will probably want to reverse this at the end of your work.

So to make a long story short, these are the steps:

On the host you want to do this from, do the following:

cd ~/.ssh
ssh-keygen -t rsa
scp /root/.ssh/id_rsa.pub remote-host:/root/.ssh/authorized_keys

This is what it will look like:

[root@linuxhost101 ~]# cd ~/.ssh
[root@linuxhost101 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5f:27:29:4e:d9:87:99:02:5e:e7:ba:86:1e:7a:9d:c8 root@linuxhost101.domain.net
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|        . . .    |
|       . o = =   |
|        S = X o  |
|         + = +   |
|       ..++.     |
|       .E.+.     |
|      .o...      |
+-----------------+
[root@linuxhost101 .ssh]# 

Then, in the above example, I wanted to copy this to another 64 hosts:

[root@linuxhost101 .ssh]# scp /root/.ssh/id_rsa.pub linuxhost102:/root/.ssh/authorized_keys
The authenticity of host 'linuxhost102 (10.22.176.2)' can't be established.
RSA key fingerprint is 1d:fa:90:54:9b:a3:59:a7:f9:12:85:09:0a:67:1b:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'linuxhost102' (RSA) to the list of known hosts.
Red Hat Enterprise Linux Server release 6.2 (Santiago)
Kernel 2.6.32-220.el6.x86_64 on an x86_64

Password: 
id_rsa.pub                                                                                                                         100%  416     0.4KB/s   00:00    
[root@linuxhost101 .ssh]# scp /root/.ssh/id_rsa.pub linuxhost103:/root/.ssh/authorized_keys

That's it. Now when you ssh or scp anything to the second host from the first, it will not prompt you for a password.
Of course, if you want to do this from more than one host, append to the authorized_keys file rather than overwriting it, like this:

cat .ssh/id_rsa.pub | ssh root@192.168.3.102 'cat >> .ssh/authorized_keys'
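To push the key out to many hosts in one shot, a small loop does the trick. This is just a sketch (the linuxhost102-165 host name pattern is an assumption for illustration); ssh-copy-id, if installed, does the same thing one host at a time.

for i in $(seq 102 165); do
    # append the key so any existing entries are preserved; you'll be prompted for each host's password
    cat ~/.ssh/id_rsa.pub | ssh root@linuxhost$i 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
done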



IMPORTANT: On CentOS 6 there is an issue where SELinux causes the keys in authorized_keys to be ignored when SELinux is set to Enforcing (the file ends up with the wrong security context). To fix this, simply restore the context:
[root@linux01 ~]# ssh root@192.168.3.102 'restorecon -R -v /root/.ssh'

Then it will work.  

Or you can just disable SELinux altogether in /etc/selinux/config (you would then need to reboot):
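For example, change the SELINUX line in /etc/selinux/config to:

SELINUX=disabled

(setting it to permissive instead also stops enforcement without fully disabling SELinux).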




* Addition: if you wanted to do this for multiple hosts, you could add the following in ~/.ssh/config:

Host *
    StrictHostKeyChecking no

or from the command line: ssh -o StrictHostKeyChecking=no <host>

You then won't be prompted about whether you trust the host you are connecting to.

Monday, April 8, 2013

10 Steps to configuring an ESXi 4.1 host to send traps to an OpenManage server

These are the steps for installing OMSA and the related configuration on ESXi 4.1 (this was done on a blade server).

  1. Get the package from: http://ftp.us.dell.com/sysman/OM-SrvAdmin-Dell-Web-6.5.0-2247.VIB-ESX41i_A01.zip
  2. Move the package to the host either via scp or storage browser
  3. Run esxupdate update --bundle ./OM-SrvAdmin-Dell-Web-6.5.0-2247.VIB-ESX41i_A01.zip

    This is the output:

/OMSA # esxupdate update --bundle ./OM-SrvAdmin-Dell-Web-6.5.0-2247.VIB-ESX41i_A01.zip
Unpacking cross_oem-dell-openmanage-esxi_6.5-0000                                             ##################################################################################################################################### [100%]

Installing packages :cross_oem-dell-openmanage-esxi_6.5-0000                                  ##################################################################################################################################### [100%]

Running [cim-install.sh]...
ok.
Running [vmkmod-install.sh]...
ok.
Running [/sbin/esxcfg-secpolicy -p /etc/vmware/secpolicy]...
ok.

The update completed successfully, but the system needs to be rebooted for the
changes to be effective.

  4.       Note that we don't have the string "CIMoem-dell-openmanage-esxiProviderEnabled" until we reboot the host:

/OMSA # esxcfg-advcfg -l | grep CIM
/UserVars/CIMEnabled [Integer] : Enable or Disable the CIM service
/UserVars/CIMemulex-cim-providerProviderEnabled [Integer] : Enable or Disable the CIM emulex-cim-provider Provider
/UserVars/CIMlsi-providerProviderEnabled [Integer] : Enable or Disable the CIM lsi-provider Provider
/UserVars/CIMqlogic-fchba-providerProviderEnabled [Integer] : Enable or Disable the CIM qlogic-fchba-provider Provider
/UserVars/CIMvmw_hdrProviderEnabled [Integer] : Enable or Disable the CIM vmw_hdr Provider
/UserVars/CIMvmw_kmoduleProviderEnabled [Integer] : Enable or Disable the CIM vmw_kmodule Provider
/UserVars/CIMvmw_lsiProviderEnabled [Integer] : Enable or Disable the CIM vmw_lsi Provider
/UserVars/CIMvmw_swmgtProviderEnabled [Integer] : Enable or Disable the CIM vmw_swmgt Provider
/OMSA #


  5.       Reboot the host (this takes about 5 minutes).

  6.       Run esxcfg-advcfg --set 1 /UserVars/CIMoem-dell-openmanage-esxiProviderEnabled

                ~ # esxcfg-advcfg --set 1 /UserVars/CIMoem-dell-openmanage-esxiProviderEnabled
Value of CIMoem-dell-openmanage-esxiProviderEnabled is 1
~ #

You can also do this through the VIC (vSphere Client):



You can verify the services are running as such:

~ # /usr/lib/ext/dell/srvadmin/bin/dataeng status
dsm_sa_datamgrd (pid 6422 ) is running
dsm_sa_eventmgrd (pid 10466 ) is running
dsm_sa_snmpd (pid 10578 ) is running
~ #

(Note: to check if the OMSA package is installed, run the following):

~ # esxupdate query
---------Bulletin ID--------- -----Installed----- --------------Summary---------------
ESXi410-201101223-UG          2011-10-11T04:56:54 3w-9xxx: scsi driver for VMware ESXi
ESXi410-201101224-UG          2011-10-11T04:56:54 vxge: net driver for VMware ESXi    
Dell_OpenManage_ESXi410_OM650 2013-03-22T17:17:38 OpenManage 6.5 for ESXi410          
~ #

  7.       Check if SNMP is enabled on the host (run this from the vSphere CLI; I took out the client-identifying information):

C:\Program Files (x86)\VMware\VMware vSphere CLI>vicfg-snmp.pl --server HOSTNAME --username root --password root --show 
Current SNMP agent settings:
Enabled  : 0
UDP port : 161

Communities :
Notification targets :

We now have to set the community string and enable the agent; as you can see, there is no entry under Communities, and the Enabled value is 0 (zero).

To Enable:

C:\Program Files (x86)\VMware\VMware vSphere CLI>vicfg-snmp.pl --server HOSTNAME --username root --password root -E
Enabling agent...
Complete.

To Set community:

C:\Program Files (x86)\VMware\VMware vSphere CLI>vicfg-snmp.pl --server HOSTNAME --username root --password root -c public
Changing community list to: public...
Complete.
  8.       As you can see in step 7, the SNMP trap destination is not configured, so we run this:

C:\Program Files (x86)\VMware\VMware vSphere CLI>vicfg-snmp.pl --server HOSTNAME --username root --password root -t 10.10.10.10@162/public
Changing notification(trap) targets list to: 10.10.10.10@162/public....
Complete.

  9.       Let's now check that it's done:

C:\Program Files (x86)\VMware\VMware vSphere CLI>vicfg-snmp.pl --server HOSTNAME --username root --password root --show
Current SNMP agent settings:
Enabled  : 1
UDP port : 161

Communities :
public

Notification targets :
10.10.10.10@162/public

  10.   We can now send a test trap and then check with the OpenManage server that it was received:

C:\Program Files (x86)\VMware\VMware vSphere CLI>vicfg-snmp.pl --server HOSTNAME --username root --password root --test
Sending test nofication(trap) to all configured targets...
Complete. Check with each target to see if trap was received.
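If you want to confirm the trap actually arrived, and the trap destination is a Linux box you can log into, a quick packet capture on the receiver is a simple sanity check. This is only a sketch; the interface name is an assumption for your environment:

tcpdump -ni eth0 udp port 162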

Done! Now go and discover this server with Dell OpenManage Essentials; note that you need to do a WS-MAN discovery. Take a look here:

http://en.community.dell.com/techcenter/systems-management/f/4494/t/19424735.aspx

Thursday, March 7, 2013

Migrating MediaWiki to Confluence

So today I will be moving a wiki that's sitting on a server running MediaWiki with a MySQL database into a Confluence wiki, which is also backed by a MySQL server.

Our prerequisites are as follows:

1. Access to the MediaWiki directory and SQL Server (at least read access)
2. A DEV or QA server with MySQL server installed (where we'll also put the UWC Converter)
3. The Confluence instance, preferably a QA one, not the production one.

First, we will need the images directory from the MediaWiki server; you can verify its location by opening the LocalSettings.php file and looking for this line:

$wgUploadPath       = "$wgScriptPath/images";

A few lines above it you will see a line like this:

$wgScriptPath       = "/somewiki";

and that will tell you where the images directory is.  In this case it will be /opt/somewiki/images

Tar up this whole directory (run this from inside the images directory):

[root@mediawiki ]# tar cvf all_wiki_images.tar .

Now we need the SQL dump. First we have to check which database MediaWiki uses and which username/password it connects with.

Again, look for these lines in the same file:

$wgDBserver         = "localhost";      
$wgDBname           = "mediawikidb";                 
$wgDBuser           = "db_user";
$wgDBpassword       = "db_pass";
$wgDBprefix         = "";
$wgDBtype           = "mysql";
$wgDBport           = "5432";         



So as you can see, we have all the info we need; we can now do the dump as follows:

[root@dev01 ]#  mysqldump --single-transaction -u db_user -pdb_pass  mediawikidb > wikidumb-mediawikidb-03-07-2013.sql

This will give you the file which we will then import into the MySQL database on the QA server.
At this point we have finished working on the original server. You may want to put up a notice telling users not to add any content, or just make the whole wiki directory mode 700 (assuming your webserver user doesn't own the directory!).
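For example, assuming the wiki lives under /opt/somewiki as above, that's just:

# block the webserver user from reading the wiki while we migrate
[root@mediawiki ]# chmod 700 /opt/somewiki

(remember to put the permissions back if you still need the old wiki afterwards).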

We will now need to create a database for this MediaWiki DB on the QA server:

mysql> create database mediawikidb;
mysql> GRANT ALL PRIVILEGES ON mediawikidb.* TO db_user@localhost IDENTIFIED BY 'db_pass';

Then let's import the database dump into the MySQL server:

From the command line (not the MySQL console) run:

[root@dev01 ]# mysql -u db_user -p mediawikidb < wikidumb-mediawikidb-03-07-2013.sql

It will then prompt you for the password, and then, depending on how large the dump is, the import will take a while.

I can now check that everything is there:

mysql> SELECT table_schema "database_name",     sum( data_length + index_length ) / 1024 /1024 "Data Base Size in MB",sum( data_free )/ 1024 / 1024 "Free Space in MB" FROM information_schema.TABLES GROUP BY table_schema;
+--------------------+----------------------+------------------+
| database_name      | Data Base Size in MB | Free Space in MB |
+--------------------+----------------------+------------------+
| c4_1               |       25047.82812500 |     590.00000000 |
| information_schema |           0.00781250 |       0.00000000 |
| mysql              |           0.63066769 |       0.00000000 |
| mediawikidb        |         127.62810230 |     420.00000000 |
+--------------------+----------------------+------------------+
4 rows in set (7.53 sec)

mysql>

And as you can see, the newly imported database (mediawikidb) is there; it's 127 MB and ready to go.

Now we will set up the UWC (Universal Wiki Converter) Exporter/Converter to export the MediaWiki content from the database, match it with the images directory we copied, and import it all into Confluence.

We will now need to go into Confluence and create a new space; let's call it mediawiki01, which will also be the space key.



We will need to create a settings file in the conf\ directory, which will look like this:

#Tue Feb 12 07:24:53 PST 2013
current.tab.index=0
space=mediawiki01
url=10.18.97.228\:8090
trustpass=
pages=/home/boaz/mediawiki01-wiki/pages
uploadOrphanAttachments=false
pageChooserDir=
attachments=/home/boaz/mediawiki01-wiki/images
trustall=
attachment.size.max=-1
sendToConfluence=true
pattern=
login=admin
truststore=
feedback.option=true
password=password
wikitype=mediawiki

The settings most relevant to this tutorial are space (the Confluence space key we just created), url (the Confluence server address and port), pages and attachments (paths to the exported pages and the images directory we copied), login/password (the Confluence credentials), and wikitype.



Then we will need to edit the file called converter.mediawiki.properties if you want to tweak any settings. In my case I wanted to keep the original users that created each page (if you don't turn this on, all the pages will be owned by the user importing them into the Confluence space), as well as the page histories.

To include user and timestamp data with page histories, export your MediaWiki data with the udmf property set to true:

1. In the conf\exporter.mediawiki.properties file, uncomment:


udmf=true




2. Install the UDMF plugin on your Confluence instance. (Important: the username/create date will not work unless this plugin is installed.)

3. In the converter.mediawiki.properties file (under conf\ ) uncomment:
Mediawiki.0004.userdate.class=com.atlassian.uwc.converters.mediawiki.UserDateConverter


4. Optionally, in your converter.mediawiki.properties, if the users in your MediaWiki are not going to be exactly the same users in Confluence (i.e., not using the same LDAP or AD), then uncomment the following and set it to false:

Mediawiki.0004.users-must-exist.property=false




You will also need to edit the conf\exporter.mediawiki.properties file; it needs the database connection details (the database name, user, and password we found in LocalSettings.php earlier) and is pretty self-explanatory.




One step to complete before you start the import is to make sure the remote API is enabled in Confluence.

Go into "General Configuration" and look for a checkmark by "Remote API (XML-RPC & SOAP)".
If there isn't one, make sure to add it, otherwise the import won't work.




Then we will run the convert/export like this:

[root@dev01 ]# ./run_cmdline.sh conf/confluence.mediawiki-with-history conf/converter.mediawiki.properties


You will see many entries roll by, and finally, after the conversion is done, it will start uploading like this:

2013-03-06 21:28:32,888 INFO  [main] - Uploaded 2200 out of 6613 pages.
2013-03-06 21:28:36,269 INFO  [main] - Uploaded 2210 out of 6613 pages.
2013-03-06 21:28:39,702 INFO  [main] - Uploaded 2220 out of 6613 pages.
2013-03-06 21:28:42,993 INFO  [main] - Uploaded 2230 out of 6613 pages.
2013-03-06 21:28:46,214 INFO  [main] - Uploaded 2240 out of 6613 pages.
2013-03-06 21:28:49,487 INFO  [main] - Uploaded 2250 out of 6613 pages.
2013-03-06 21:28:52,771 INFO  [main] - Uploaded 2260 out of 6613 pages.
2013-03-06 21:28:56,051 INFO  [main] - Uploaded 2270 out of 6613 pages.
2013-03-06 21:28:59,435 INFO  [main] - Uploaded 2280 out of 6613 pages.
2013-03-06 21:29:02,735 INFO  [main] - Uploaded 2290 out of 6613 pages.
2013-03-06 21:29:06,083 INFO  [main] - Uploaded 2300 out of 6613 pages.
2013-03-06 21:29:09,476 INFO  [main] - Uploaded 2310 out of 6613 pages.
2013-03-06 21:29:12,878 INFO  [main] - Uploaded 2320 out of 6613 pages.
2013-03-06 21:29:16,218 INFO  [main] - Uploaded 2330 out of 6613 pages.
2013-03-06 21:29:19,468 INFO  [main] - Uploaded 2340 out of 6613 pages.
2013-03-06 21:29:23,538 INFO  [main] - Uploaded 2350 out of 6613 pages.
2013-03-06 21:29:26,804 INFO  [main] - Uploaded 2360 out of 6613 pages.
2013-03-06 21:29:29,785 INFO  [main] - Uploaded 2370 out of 6613 pages.
2013-03-06 21:29:32,740 INFO  [main] - attachment written JPAValidationSampleRegEx.png
2013-03-06 21:29:32,740 INFO  [main] - Attachment Uploaded: /home/boaz/mediawiki/images/images/9/9f/JPAValidationSampleRegEx.png
2013-03-06 21:29:32,773 INFO  [main] - attachment written JPAValidationSampleTester.png
2013-03-06 21:29:32,773 INFO  [main] - Attachment Uploaded: /home/boaz/mediawiki/images/images/6/63/JPAValidationSampleTester.png
2013-03-06 21:29:32,808 INFO  [main] - attachment written JPAValidationSample.png
2013-03-06 21:29:32,808 INFO  [main] - Attachment Uploaded: /home/boaz/mediawiki/images/images/9/96/JPAValidationSample.png
2013-03-06 21:29:32,843 INFO  [main] - Uploaded 2380 out of 6613 pages.
2013-03-06 21:29:35,873 INFO  [main] - Uploaded 2390 out of 6613 pages.



That's all; you now have the MediaWiki content sitting inside a new space on your Confluence wiki.



Thursday, January 31, 2013

Setting up a NetApp FAS3020 and a DS14MK4 Shelf

This post is about setting up a NetApp FAS3020 with a Fibre Channel disk shelf (a DS14MK4). Since I got these from two different places, the head unit doesn't have an OS on it, and the shelf is "owned" by a different filer.

So in order to make this usable, we first need to connect it, as you can see in the picture below:







So as you can see, we have the blue Cisco console cable, which went to COM2 on my PC. Download PuTTY if you don't have it already, and set the serial connection to 9600 baud, 1 stop bit, no parity, as you can see in the picture below:


As far as connectivity goes, you will need some GBICs, 4 Gb in this case, as the FAS3020 only supports 4 Gb. These look like this:


Then connect an LC/LC fibre channel cable from port 0a or 0b on the filer to the "down arrow" port on the shelf.

  
If you connect another shelf, then you use the "UP" arrow and put a cable from there to the "DOWN" arrow on the next shelf.  

Then you can power up the filer and shelf, and look in the Putty window.

First off, you want to do a "license show" and copy the output somewhere; those are the licenses for this filer and you will need them later on, after you install ONTAP.


Since we have to take ownership of the disks and load up ONTAP, press CTRL+C at boot time, choose option 5 (maintenance mode), then type disk show -v.

This will also show you the system ID of this head; each disk in the output shows the system ID it currently belongs to, so you run: disk reassign -s old_sysid -d new_sysid

Then do mailbox destroy local (the mailbox disks store configuration information), and finally do a halt.

You will then be back at the boot prompt; do a "bye" and it will reboot. Again press CTRL+C, then choose option 4a, and it will do what you see in the screenshot below:
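Putting the maintenance-mode steps together, the console session looks roughly like this (the *> prompt is the maintenance-mode prompt and the sysids are placeholders, not real values):

*> disk show -v
*> disk reassign -s <old_sysid> -d <new_sysid>
*> mailbox destroy local
*> halt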

  
You follow this same procedure if a head unit dies and you have to replace it, except it gets a little more hairy if there are two heads and they are clustered; you can read about that in this document. You will need a NetApp login, but it's just a registration.

As you can see in the screenshot, it says it will take 1090 minutes, which is over 18 hours; however, this actually completed in about 1-2 hours.

Finally, we will install ONTAP. First put a copy of the ONTAP installer on an HTTP server; in my case I set up a Linux VM on VMware Workstation:

FAS3020c> software update http://192.168.240.34/files/737_setup_e.exe -f
software: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
software: Depending on system load, it may take many minutes
software: to complete this operation. Until it finishes, you will
software: not be able to use the console.
software: copying to 737_setup_e.exe
software: 100% file read from location.
software: /etc/software/737_setup_e.exe has been copied.
software: installing software, this could take a few minutes...
software: Data ONTAP Package Manager Verifier 1
software: Validating metadata entries in /etc/boot/NPM_METADATA.txt
software: Checking sha1 checksum of file checksum file: /etc/boot/NPM_FCSUM-pc_elf.sha1.asc
software: Checking sha1 file checksums in /etc/boot/NPM_FCSUM-pc_elf.sha1.asc
software: installation of 737_setup_e.exe completed.
Thu Jan 31 06:29:12 GMT [rc:info]: software: installation of 737_setup_e.exe completed.
Thu Jan 31 06:29:12 GMT [download.request:notice]: Operator requested download initiated

download: Downloading boot device
download: If upgrading from a version of Data ONTAP prior to 7.3, please ensure
download: there is at least 3% of available space on each aggregate before
download: upgrading.  Additional information can be found in the release notes.
Version 1 ELF86 kernel detected.
..................................................
download: Downloading boot device (Service Area)
...........................
Thu Jan 31 06:39:11 GMT [download.requestDone:notice]: Operator requested download completed
Thu Jan 31 06:39:11 GMT [kern.shutdown:notice]: System shut down because : "reboot".
 

CFE version 3.1.0 based on Broadcom CFE: 1.0.40
Copyright (C) 2000,2001,2002,2003 Broadcom Corporation.
Portions Copyright (c) 2002-2006 Network Appliance, Inc.

CPU type 0xF29: 2800MHz
Total memory: 0x80000000 bytes (2048MB)



And now you may have to change the password, so type "password" and change the password for root. Setup will then ask you a bunch of questions; answer them to configure the IP address, subnet mask and gateway, then connect to the FilerView interface through a browser as below:


One note: your filer now has no licenses, so you need to add the ones you saved earlier, as well as do the other configurations, some of which are covered elsewhere on this blog.
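Adding the licenses you copied from "license show" earlier is just a matter of running the following for each code (the code shown here is a placeholder, not a real license):

FAS3020c> license add ABCDEFG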


Moving Confluence between 2 Linux Servers (Confluence 4.0.3 and MySQL 5.x)

Today we will move a large Confluence installation (over 2 GB of files and a multi-gigabyte database).

Since this is a production confluence, we will first want to create a QA environment, where we will recreate
the running installation and make sure nothing goes wrong!

Step 1:  We first create two VMs, 4 GB of memory each, running CentOS:










As you can see, one VM is going to be the Confluence host, and the other will host MySQL.

Step 2:  We copy the confluence configuration and data directories to the QA VM:

cat <confluence installation directory>/WEB-INF/classes/confluence-init.properties

This file will show you where the Confluence home is; look for something that looks like this:

confluence.home = /opt/confluence/confluence-data

You can run this little script to find out various information about the confluence installation:


[root@01 bin]# /<confluence dir>/bin/version.sh 
If you encounter issues starting up Confluence Standalone, please see the Installation guide at http://confluence.atlassian.com/display/DOC/Confluence+Installation+Guide
Using CATALINA_BASE:   /opt/confluence
Using CATALINA_HOME:   /opt/confluence
Using CATALINA_TMPDIR: /opt/confluence/temp
Using JRE_HOME:        /opt/confluence/jre/
Using CLASSPATH:       /opt/confluence/bin/bootstrap.jar
Using CATALINA_PID:    /opt/confluence/work/catalina.pid
Server version: Apache Tomcat/6.0.32
Server built:   February 2 2011 2003
Server number:  6.0.32.0
OS Name:        Linux
OS Version:     2.6.18-308.16.1.el5
Architecture:   amd64
JVM Version:    1.6.0_26-b03
JVM Vendor:     Sun Microsystems Inc.
[root@01 bin]# 

So this was easy; we just tar up the two directories (run from inside each directory) and move them over to our VM:

tar cvf confluence.tar .
tar cvf confluence-data.tar .
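To move them over, something like this works (the destination path is an assumption for illustration):

scp confluence.tar confluence-data.tar root@confluence-QA:/opt/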


Step 3:  Getting a MySQL database dump (for a Postgres version, you can see my post here).

Since this is a production database, the default mysqldump settings will lock the tables during the dump, and this might cause problems for the Confluence instance using the database. There are different ways to avoid this (like backing up from a MySQL slave, using mysqlhotcopy, etc.), but if the tables are InnoDB then we can add the --single-transaction parameter to mysqldump and avoid the issue.

You can cat /etc/my.cnf to see whether you are running InnoDB tables or not.  In our case we are, so we issue this command:

# mysqldump --single-transaction -u root -p  c4_1 > Confluence_backupfile-11-26-2012.v2.sql

However, before this step, let's make sure we are dumping the right database; log into the MySQL console and check the databases to be sure.  (In the case above, I already did, so I know it's the c4_1 database!)


[root@02 ~]#  mysql -h localhost -u root -p

Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13470
Server version: 5.0.95-log Source distribution

Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

I run this command: 

SELECT table_schema "database_name",     sum( data_length + index_length ) / 1024 /1024 "Data Base Size in MB",sum( data_free )/ 1024 / 1024 "Free Space in MB" FROM information_schema.TABLES GROUP BY table_schema;

So I can get the sizes in human form... :)



So as you can see, c4_1 is about 24 GB, so this will take a little time.

After the dump completes, we have this file (note the size!):

[root@02 ]# ls -lah
total 42G
drwxrwxrwx  8 root            root         4.0K Nov 26 19:21 .
drwxr-xr-x 31 root            root         4.0K Oct  9 22:09 ..
-rw-r--r--  1 root            root          24G Nov 26 21:03 Confluence_backupfile-11-26-2012.v2.sql
[root@02 ]# 

We will now need to import it into the QA MySQL server.  

So since this is a new CentOS 6.2:

[root@MySQL-QA ~]# cat /etc/redhat-release 
CentOS release 6.2 (Final)
[root@MySQL-QA ~]# 

I already installed MySQL during the setup wizard at the beginning, so we should have a working
installation. I will start it up, and it will print its first-run script output:

[root@MySQL-QA ~]# /etc/init.d/mysqld start
Initializing MySQL database:  WARNING: The host 'MySQL-QA' could not be looked up with resolveip.
This probably means that your libc libraries are not 100 % compatible
with this binary MySQL version. The MySQL daemon, mysqld, should work
normally with the exception that host name resolving will not work.
This means that you should use IP addresses instead of hostnames
when specifying MySQL privileges !
Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h MySQL-QA password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

                                                           [  OK  ]
Starting mysqld:                                           [  OK  ]


So as you can see, it's asking us to run /usr/bin/mysql_secure_installation, which we will do now:

[root@MySQL-QA ~]# /usr/bin/mysql_secure_installation




NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!


In order to log into MySQL to secure it, we'll need the current
password for the root user.  If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

Set root password? [Y/n] Y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MySQL comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...



All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!
[root@MySQL-QA ~]# 

Now if we check our databases as we did earlier, we'll see that this is a brand-new installation.



So we will need to create the database and then import the data to it:


mysql> create database c4_1;
mysql> GRANT ALL PRIVILEGES ON c4_1.* TO root@localhost IDENTIFIED BY 'password01';


let's see that it's there:


mysql> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| c4_1               |
| mysql              |
+--------------------+
3 rows in set (0.00 sec)

mysql>


OK, let's see what privileges our users have:

select * from mysql.user where User='root';

or, for a clearer view:

SHOW GRANTS FOR 'root'@'localhost';



mysql> SHOW GRANTS FOR 'root'@'localhost';
+----------------------------------------------------------------------------------------------------------------------------------------+
| Grants for root@localhost                                                                                                              |
+----------------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*EFAF6C387B0DCF3A00F47270618E0D8DF69B7C79' WITH GRANT OPTION |
| GRANT ALL PRIVILEGES ON `c4_1`.* TO 'root'@'localhost'                                                                                 |
+----------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> 



Ok, now we can go on to importing the database, which we will do like this:

mysql -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]

so, 



[root@MySQL-QA boaz]# mysql -u root -p c4_1 < Confluence_backupfile-11-26-2012.v2.sql

I recommend you take the /etc/my.cnf settings from the original server; otherwise you may run into trouble if it's not a typical installation.  In my case I got an error like this:


ERROR 1598 (HY000) at line 41: Binary logging not possible. Message: Transaction level 'READ-COMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'

This error is discussed in various places online, but the gist of it is: copy the my.cnf settings from the production server.
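One common workaround (an assumption on my part, not something taken from the original production config) is to switch the binary log format on the QA box in /etc/my.cnf and restart MySQL:

[mysqld]
# allows the READ-COMMITTED isolation level used by the dump to coexist with binary logging
binlog_format=MIXED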

You can check that progress is being made by logging into another tty and going into the SQL console.
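For example, re-running the same size query we used earlier shows the c4_1 database growing as rows are loaded:

mysql> SELECT table_schema "database_name", sum( data_length + index_length ) / 1024 /1024 "Data Base Size in MB" FROM information_schema.TABLES GROUP BY table_schema;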

Re-running that check, we are now at about 1800 MB, so another 22 GB or so to go...

Now we want to make sure we can connect to the MySQL DB remotely, so we need to add a binding in the /etc/my.cnf file, like this:

bind-address=172.18.97.71 (or whatever the server IP is)

and then restart the server:

/etc/init.d/mysqld restart

Now, most likely you have iptables running on it, so you want to stop or disable it; in my case I stop it and then disable it.
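On CentOS 6 that would be (disabling the firewall entirely is the blunt approach; you could instead just open port 3306):

[root@MySQL-QA ~]# service iptables stop
[root@MySQL-QA ~]# chkconfig iptables off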

Then check whether you can telnet to port 3306 from the Confluence server.
Initially it gave me a "No route to host", but after disabling iptables it's working.
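The check itself is just (assuming telnet is installed; mysql-qa is the QA MySQL host name used later in server.xml):

[root@confluence-QA ~]# telnet mysql-qa 3306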

Next we will need to allow the other machine's IP/Hostname to connect to the MySQL DB, and we do this as follows:

Connect to the MySQL console:

 [root@MySQL-QA boaz]# mysql -h localhost -u root -p

Allow root or whichever user you're using to connect from whichever IP/Hostname:

mysql> GRANT ALL ON *.* to root@'172.18.97.68' IDENTIFIED BY 'password01';
Query OK, 0 rows affected (0.05 sec)



Step 4: Configuring Confluence to connect to the DB and get it working.

First off, edit the server.xml file and change the connection to the database:

<Resource name="jdbc/confluence" auth="Container" type="javax.sql.DataSource"
         username="root"
         password="password01"
         driverClassName="com.mysql.jdbc.Driver"
         url="jdbc:mysql://mysql-qa:3306/c4_1?useUnicode=true&amp;characterEncoding=utf8"
         maxActive="15"
         maxIdle="7"
         defaultTransactionIsolation="READ_COMMITTED"
         validationQuery="Select 1" />

                </Context>
            </Host>

The host name in the url line (mysql-qa) previously pointed at the production database server; I added this hostname to /etc/hosts on the Confluence VM so it resolves to the QA MySQL server.
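The /etc/hosts entry is just a line like this, using the MySQL VM's IP from earlier:

172.18.97.71    mysql-qa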

You can use the script below as an init script; put it in /etc/init.d/ and call it confluence or something:

#!/bin/sh
#
# Startup script for jakarta tomcat
#
# chkconfig: - 85 20
# description: Confluence running
# processname: confluence
# pidfile: /opt/kb/confluence4/work/catalina.pid
# config:
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "${NETWORKING}" = "no" ] && exit 0

# Set Tomcat environment.
export JAVA_HOME=/opt/kb/confluence4/jre
export JRE_HOME=/opt/kb/confluence4/jre
export CATALINA_HOME=/opt/kb/c4
export CATALINA_TMPDIR=/opt/kb/confluence4/temp
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$CATALINA_HOME/lib/servlet-api.jar:$CATALINA_HOME/bin/bootstrap.jar
#export CATALINA_OPTS="-Dbuild.compiler.emacs=true -Xms2048m -Xmx4096m -XX:MaxPermSize=1024m"
export CATALINA_OPTS="-Xms3072m -Xmx3072m -XX:MaxPermSize=1024m"
export PATH=$JAVA_HOME/bin:$PATH:/usr/bin:/usr/lib/bin

case "$1" in
start)
        # Start daemon.
        echo -n "Starting Confluence: "
        /opt/kb/confluence4/bin/start-confluence.sh
        RETVAL=$?
        echo
        [ $RETVAL = 0 ] && touch /var/lock/subsys/confluence ;;
stop)
        # Stop daemons.
        echo -n "Shutting down Confluence: "
        /opt/kb/confluence4/bin/stop-confluence.sh
        RETVAL=$?
        echo
        [ $RETVAL = 0 ] && rm -f /var/lock/subsys/confluence ;;
restart)
        $0 stop
        $0 start
        ;;
condrestart)
        [ -e /var/lock/subsys/confluence ] && $0 restart ;;
status)
        status confluence
        ;;
        *)
echo "Usage: $0 {start|stop|restart|status}"
exit 1
esac
exit 0


You also need to add a confluence user; do so as follows:

[root@confluence-QA boaz]# adduser  -c "Confluence User" -p "password01" -d "/home/confluence" confluence

You can put in whichever password you want, of course; this is just for show. (Note that the -p flag expects an already-encrypted password string, so in practice it's simpler to set the password afterwards with passwd confluence.)


One note: typically Confluence will listen on a port higher than 1024 because it's running Tomcat on the back end, so the port you will need to connect to can be found in server.xml; typically it's 8090.

So, in order to connect and not have iptables block that port, disable iptables on the Confluence box as well,
or put in a rule to allow TCP port 8090.
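If you'd rather keep the firewall on, a rule like this opens just that port (assuming the default 8090):

[root@confluence-QA ~]# iptables -I INPUT -p tcp --dport 8090 -j ACCEPT
[root@confluence-QA ~]# service iptables save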

That's it, you should now have a working copy of confluence on the new server!

Start it up:  /etc/init.d/confluence start

and watch the logs at <directory>/logs/atlassian-confluence.log  and catalina.out

when you see in catalina.out for example:

INFO: Server startup in 220538 ms

then you can go to the browser and put in the URL for the server.