Wednesday, January 30, 2019

How to have AWS Config send an SNS notification if a VPC allows unrestricted incoming SSH traffic.


This post shows how to have AWS Config send an SNS notification if a VPC allows unrestricted incoming SSH traffic.


General Overview:

To have AWS Config start recording in a region, you need to enable it there.  If you have many accounts, and every account at this point has 15 regions, that's a lot of work in the GUI, so we will
do most of it with the CLI.  
Steps:
  1. Configure an S3 bucket to hold all the awsconfig data.
  2. Configure SNS Topic for this in each region
  3. Create an IAM Role for AWS Config to send to S3/SNS/Get the data
  4. Enable aws config per account in each region.
  5. Create an Aggregator in the main account to receive all the data from all the other accounts/regions
  6. Authorize the above aggregator in each and every account/region
  7. Test that it works.

S3:

  1. We need to create an S3 bucket to hold all these configurations.  Since S3 bucket names are global across all of AWS, I created a bucket called awsconfig-bucket in one account and allowed all the other accounts to write to it.  
    1. We need to allow all the other accounts to write to the bucket.
    2. You can get the canonical account ID by doing this:  
      | => aws s3api list-buckets --profile xxx | grep OWNER
      OWNER XXX-XXXX 8d1da5691369aa323454339960db71b048b21de4db6f6b8b698e83df0c27129b4

    3. Next, we need to set a bucket policy on the S3 bucket so that it can receive log files from multiple accounts (ref: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-set-bucket-policy-for-multiple-accounts.html ), otherwise you will get an error like: 
      An error occurred (InsufficientDeliveryPolicyException) when calling the PutDeliveryChannel operation: Insufficient delivery policy to s3 bucket

We'll put this policy on the bucket (inline, or as a named policy):

  1. {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AWSCloudTrailAclCheck20131101",
          "Effect": "Allow",
          "Principal": {
            "Service": "config.amazonaws.com"
          },
          "Action": "s3:GetBucketAcl",
          "Resource": "arn:aws:s3:::awsconfig-bucket"
        },
        {
          "Sid": "AWSCloudTrailWrite20131101",
          "Effect": "Allow",
          "Principal": {
            "Service": "config.amazonaws.com"
          },
          "Action": "s3:PutObject",
          "Resource": [
            "arn:aws:s3:::awsconfig-bucket/AWSLogs/xxxxxxx/*",
            "arn:aws:s3:::awsconfig-bucket/AWSLogs/xxxxxxx/*",
            "arn:aws:s3:::awsconfig-bucket/AWSLogs/xxxxxxx/*" 
          ],
          "Condition": {
            "StringEquals": {
              "s3:x-amz-acl": "bucket-owner-full-control"
            }
          }
        }
      ]
    }
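To create the bucket and attach this policy from the CLI, something like the following should do it. This is just a sketch: it assumes the policy above is saved locally as bucket_policy.json and that you run it with the profile of the account that owns the bucket.

# create the bucket, then attach the policy that lets the other accounts deliver into it
aws s3 mb s3://awsconfig-bucket --profile xxx
aws s3api put-bucket-policy --bucket awsconfig-bucket --policy file://bucket_policy.json --profile xxx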
  2. AWS CONFIG


    1. The command to enable AWS Config is like this:
      aws configservice subscribe --s3-bucket awsconfig-bucket --sns-topic arn:aws:sns:us-west-2:xxxxxxxxxxx:awsconfig --iam-role arn:aws:iam::xxxxxx:role/aws_config_s3_role --profile xxxxxx --region us-west-2
      Therefore we have to enable all the components in the command, i.e. SNS and S3, and configure the IAM role:

      SNS: 

      You have to have an SNS topic, otherwise it gives you an error.  So we create "dummy" SNS topics just for the command to work:
      for z in `cat list_of_regions.txt` ; do echo "aws sns create-topic --name awsconfig --profile `echo $PROFILE` --region $z"; done

    So we get something like this:

    aws sns create-topic --name awsconfig --profile xxxxxxx --region us-west-2
    Then we need to create this for every region in every account (using the bash one-liner, as sketched below).
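    To cover every region in every account in one shot, the same echo trick can be nested. A sketch, assuming list_of_accounts.txt holds the CLI profile names:

    for p in `cat list_of_accounts.txt` ; do for z in `cat list_of_regions.txt` ; do echo "aws sns create-topic --name awsconfig --profile $p --region $z"; done; done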
  3. IAM: 

    1. Next we create an IAM role per account that grants AWS Config permissions to access the Amazon S3 bucket, access the Amazon SNS topic, and get configuration details for supported AWS resources.  Let's call this role awsconfig_role.  We attach three policies to this role: AWSConfigRole and AmazonSNSRole (AWS managed policies), plus a policy we create to write to the S3 bucket:

| => cat s3_policy.json 
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::awsconfig-bucket",
                "arn:aws:s3:::*/*"
            ]
        }
    ]
}

Also in the trust relationship, we add this policy in JSON:
| => cat trust_relationship_awsconfig.json 
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "config.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

This is so that AWS Config can assume the role and do all its work.
We can do this via the CLI:
aws iam create-role --role-name awsconfig_role --assume-role-policy-document file://trust_relationship_awsconfig.json --profile XXXX
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/service-role/AWSConfigRole --role-name awsconfig_role --profile XXXX
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSNSRole --role-name awsconfig_role --profile XXXX
aws iam put-role-policy --role-name awsconfig_role --policy-name S3_AWSCONFIG_BUCKET --policy-document file://s3_policy.json --profile XXXX
So now we use the same configuration to create this role in all 23 accounts:
for z in `cat list_of_accounts.txt` ; do echo "aws iam create-role --role-name awsconfig_role --assume-role-policy-document file://trust_relationship_awsconfig.json --profile $z"; done
for z in `cat list_of_accounts.txt` ; do echo "aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/service-role/AWSConfigRole --role-name awsconfig_role --profile $z"; done
for z in `cat list_of_accounts.txt` ; do echo "aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSNSRole --role-name awsconfig_role --profile $z"; done
for z in `cat list_of_accounts.txt` ; do echo "aws iam put-role-policy --role-name awsconfig_role --policy-name S3_AWSCONFIG_BUCKET --policy-document file://s3_policy.json --profile $z"; done

AGGREGATOR

5. Next we need to create an aggregator in the main account for all the accounts and all regions. (We do this now so it sends authorization requests to all accounts/regions.)
aws configservice put-configuration-aggregator --configuration-aggregator-name XXXX_ALL --account-aggregation-sources "[{\"AccountIds\": [\"123456789\",\"12343333333\",\"AccountID3\"],\"AllAwsRegions\": true}]" 
(You can put all the account IDs you need; I put all of them.)
You can check that this worked by doing this:  
| => aws configservice describe-configuration-aggregators
CONFIGURATIONAGGREGATORS arn:aws:config:us-east-1:12344555445:config-aggregator/config-aggregator-aphkgdyd XXXX_ALL 1546544013.02 1547232995.08
ACCOUNTAGGREGATIONSOURCES True
ACCOUNTIDS 123456789112
ACCOUNTIDS 123456789113
ACCOUNTIDS 123456789114
ACCOUNTIDS 123456789115
ACCOUNTIDS 123456789116
ACCOUNTIDS 123456789117
... the rest of the accounts appear here; I cut them out for the purposes of this page.

6. Finally, after adding all the source accounts, authorization requests are sent everywhere (to all regions of every account), so we need to authorize all of them.
This is what it looks like from the GUI (however, we don't want to log into ALL accounts * 16 regions, hundreds of times, to do this!)

We do this with the CLI.
For example, this authorizes us-east-1 in account 12344555445:
aws configservice put-aggregation-authorization --authorized-account-id 12344555445 --authorized-aws-region us-east-1 --profile 12344555445 --region us-east-1
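To avoid running that by hand for every account/region combination, the same nested-loop sketch works here too (assuming, as above, that the aggregator lives in account 12344555445 in us-east-1 and that list_of_accounts.txt holds the profile names):

for p in `cat list_of_accounts.txt` ; do for z in `cat list_of_regions.txt` ; do echo "aws configservice put-aggregation-authorization --authorized-account-id 12344555445 --authorized-aws-region us-east-1 --profile $p --region $z"; done ; done

Once the authorizations are in place, a quick way to check that every source account is reporting (step 7) is:

aws configservice describe-configuration-aggregator-sources-status --configuration-aggregator-name XXXX_ALL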

Friday, March 16, 2018

Azure bits and pieces - CSV Dump of VMs and Private/Public IP's

Getting a list of Azure VM's and IP's


You would think this would be easier, and it's relatively easy from the GUI, but not if you have over 50 VM's or so.  As of today (March 2018) there is no option to export to CSV or the like from the interface, so doing this through PowerShell is the solution.

Note: I had many issues with Azure Cloud Shell (see pic); many commands simply don't work, or don't work as expected, in Cloud Shell, whereas on my Mac with PowerShell installed they work fine.



Attached is a script I use to get any stats on VM's I need; see the screenshot below.



You can download this script here as well
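If you just need the raw list and don't mind skipping PowerShell, the cross-platform Azure CLI can produce a similar dump. This is an alternative sketch, not the attached script; it assumes the az CLI is installed and you're already logged in with az login:

az vm list-ip-addresses --output table

The table output shows each VM with its private and public IPs. For CSV-like output you can add a JMESPath --query (for example --query "[].virtualMachine.{Name:name, PrivateIP:network.privateIpAddresses[0], PublicIP:network.publicIpAddresses[0].ipAddress}" --output tsv), though you may need to adjust the query to the exact output shape.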


Wednesday, June 8, 2016

Getting a list of patches for your ESX hosts through update manager PowerCLI

So you have a lot of VMware hosts in your environment and want to get an actual list of all the patches and upgrades needed?

Well, I know you can get that in the compliance view in the update manager plugin in vCenter, however there is no way to print it from there.

You can easily do it from PowerCLI, however you also need to install the VUM (VMware Update Manager) PowerCLI module, on top of your regular VMware PowerCLI:


You can find the version you need here:

https://communities.vmware.com/community/vmtn/automationtools/powercli/updatemanager 

For me, it was version 6, so install that, and then connect to your vCenter and you can run this PowerCLI script:

$ComplianceResult = @()
ForEach ($HostToCheck in Get-VMHost){
    $Details = Get-Compliance $HostToCheck -Detailed | Select -ExpandProperty NotCompliantPatches | Select @{N="Hostname";E={$HostToCheck}}, Severity, IdByVendor, ReleaseDate, Description, Name
    $ComplianceResult += $Details
}
$ComplianceResult | Export-CSV -Path c:\temp\NeededPatches.CSV -NoType



That will create a CSV file in c:\temp that will have information like this:




That's about it, you now have a detailed list of hosts/patches needed with a URL to the VMware KB for a description of the patch.



Tuesday, July 7, 2015

Add Multiple Hosts to vCenter and other PowerCLI snippets

Once in a while you may need to add a whole chassis of 16 blades, or even 5 chassis, to vCenter, and I'm pretty sure you don't want to do it manually 90 times....

So here it goes:

1.  Make a file, call it vcenter-hosts.txt or whatever you want, put it in c:\temp, and add all the hosts that you want to enter.  Mine looks like this:

vmhost-la01-ch01-bl01.dvirt.net
vmhost-la01-ch01-bl02.dvirt.net
vmhost-la01-ch01-bl03.dvirt.net
vmhost-la01-ch01-bl04.dvirt.net

2.  Connect via PowerCLI to your vCenter and issue this command:

Get-Content c:\temp\vcenter-hosts.txt | Foreach-Object { Add-VMHost $_ -Location (Get-Datacenter LosAngeles01) -User root -Password changeme -RunAsync -Force:$true }

You will see this:



On the vCenter it will look like this:



That's it.

Of course your hosts need to be resolvable by the vCenter, or you will get a nice error like this:

Add-VMHost : 6/22/2015 8:23:45 PM    Add-VMHost        Cannot contact the
specified host (host01.blah.net). The host may not be available on
the network, a network configuration problem may exist, or the management
services on this host may not be responding.
At line:1 char:45



After you add all these hosts, you may want to use Ansible to configure them all, or if you prefer, you can do some things such as setting the hostname, DNS and others via the command line, as shown below:

Set ESXi hostname via Command line (SSH directly to the host)
esxcli system hostname set --host=esxi08.abcdomain.net

Set ESXi search domains: (SSH directly to the host)
esxcli network ip dns search add -d yahoo.com
esxcli network ip dns search add -d domain.local

Set up nameserver/s: (SSH directly to the host)
esxcli network ip dns server add  -s 4.2.2.2


Another issue that may come up (especially if you use Ansible) is that you may want to rename all your datastores to a consistent naming scheme, or simply give them better names; this would be the command in PowerCLI:

get-vmhost esxi08.abcdomain.net |  get-datastore | set-datastore -name esxi08-local

It would look like this:



However, you need to do this when the host is NOT in vCenter.  When you import, say, 16 hosts into vCenter, the first one will have its datastore called "datastore1", the next one will be "datastore1 (1)", the one after that "datastore1 (2)", and so on.  Example:


So in order for Ansible to work when it's expecting datastore1, you need to rename the datastore to that (or just leave it if you didn't bring it into vCenter).  Once you remove a host from vCenter, the name remains, but then you can use the command above to change it back, or change it to whatever name you want.





Monday, June 29, 2015

Deploying multiple Windows VM's from template (powerCLI)


Unlike this post, which talks about deploying Linux VM's, this one is about deploying Windows VM's, which is a little different.

Your PowerCLI command will look like this:

Import-Csv "C:\boaz\NewVMs-LA01.csv" -UseCulture | %{
## Gets Customization info to set NIC to Static and assign static IP address
    Get-OSCustomizationSpec $_.Customization | Get-OSCustomizationNicMapping | `
## Sets the Static IP info
    Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress $_."IPAddress" `
        -SubnetMask $_.Subnetmask -DefaultGateway $_.DefaultGateway -Dns $_.DNS1,$_.DNS2
## Sets the name of the VMs OS 
    $cust = Get-OSCustomizationSpec -name Windows2008R2_profile 
    Set-OSCustomizationSpec -OSCustomizationSpec $cust -NamingScheme Fixed -NamingPrefix $_.VMName
## Creates the New VM from the template
    $vm=New-VM -name  $_."VMName" -Template $_.Template -Host $_."VMHost" `
        -Datastore $_.Datastore -OSCustomizationSpec $_.Customization `
        -Confirm:$false -RunAsync
}

You will of course need to create a customization profile for this Windows Server, in which you can put all the relevant information, including a Domain membership, license key (If you're not using a KMS server) and others.

Your CSV file for this example looks like this:


You can download this CSV from here

Unlike the Linux case, your PowerCLI output will look like this (sorry for the red patches, I had to remove identifying information):


Thursday, June 11, 2015

Adding a Stand Alone ESXi host to Active Directory Authentication

If you're not using vCenter, or even if you are and your hosts aren't in lockdown mode, you may want to have authentication to the local ESXi hosts done through Active Directory.

These are the steps:

Prerequisites:

Since you are adding the host to the domain, you need a name server that can resolve the Active Directory domain controller (or the server that holds the master FSMO role), via the
_ldap._tcp.dc._msdcs.<DNSDomainName> SRV resource record, which identifies the name of the domain controller that hosts the Active Directory domain. 

Go to DNS and Routing, and put in the hostname, the domain (as it shows in AD), and the IP of a DNS server that can answer the SRV record query for the domain. 






1.  Go to Configuration --> Authentication Services, and then to Properties.


2. Choose "Active Directory" from the pull-down menu, and then put in your domain name, and click "Join Domain"  it will then prompt you to put in credentials of a user that can add computers to the domain.



IMPORTANT:  Wait until this finishes.  Look for an event saying "Join Windows domain" and wait for it to complete (see pic below); don't continue until this is done.




3.  Go to Configuration --> Advanced Settings, scroll down to Config.HostAgent.plugins.hostsvc.esxAdminsGroup, and add the Active Directory group that you want to be Administrators on this box.  




4.  SSH into the box, create a directory /var/lock/subsys, then restart the following services as such:

~ # mkdir /var/lock/subsys
~ # /etc/init.d/netlogond restart; /etc/init.d/lwiod restart; /etc/init.d/lsassd restart;




5.  Now you should see the domain you added when you go to add a permission, as well as any trusts if you have that configured.



That's it.  You can now log in to this ESXi host with your domain\username and your AD password.  
However root/password will still work, so you may want to set a different root password so that no one who knew the old root password can access the ESXi host.


Friday, March 13, 2015

How to go into Single User mode when Password is needed in RedHat

How to go into Single User mode when Password is requested in RedHat


Sometimes you don't have the password when P2V'ing, or you've just lost the password, and adding "single" at the end of the kernel line doesn't work; it still gives you a screen like this:


If you press Control-D, then the system just continues boot.

This is how you get past it:


Single User Mode when asked for root password for maintenance


1. Go to the VM console (or physical server console).
2. Reboot your machine and press 'Esc' repeatedly until you get to the GRUB menu; you will get something like this:


3. Press Enter, and select the kernel line (#2) 


4. Press 'e' to edit the line: get rid of 'quiet' and 'splash', change 'ro' to 'rw', and add 'init=/bin/bash'. The line should look something like this:

grub edit> kernel /vmlinuz-2.6.32-220.el6.x86_64 root=/dev/mapper/vg_root-lv_root rw init=/bin/bash



Then press "Enter" and then "B" (to boot)  then it will give you a root prompt:


And that's it.  Of course you could also boot from an ISO file or a DVD, then mount the filesystem and change the password or whatever else you need to do, but this is quicker.

Tuesday, February 24, 2015

Converting Physical Servers to VM's and getting the "Unable to query the live linux source machine" error

The famous "Unable to query the live linux source machine" error.  

So you could be trying to convert a physical box that has interfaces that look like this:

[root@linux01 ~]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 78:2B:CB:11:71:B4  
          inet6 addr: fe80::7a2b:cbff:fe11:71b4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:571643146 errors:0 dropped:0 overruns:0 frame:0
          TX packets:260652914 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:734439978 (700.4 MiB)  TX bytes:679368267 (647.8 MiB)
          Interrupt:90 Memory:da000000-da012800 

eth0.28  Link encap:Ethernet  HWaddr 78:2B:CB:11:71:B4  
          inet addr:10.28.0.2  Bcast:10.225.255.255  Mask:255.255.0.0
          inet6 addr: fe80::7a2b:cbff:fe11:71b4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:493616754 errors:0 dropped:0 overruns:0 frame:0
          TX packets:260652917 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1212214321 (1.1 GiB)  TX bytes:2853259219 (2.6 GiB)

eth1      Link encap:Ethernet  HWaddr 78:2B:CB:11:71:B5  
          inet addr:10.23.1.2  Bcast:10.233.255.255  Mask:255.255.0.0
          inet6 addr: fe80::7a2b:cbff:fe11:71b5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:91354729 errors:0 dropped:0 overruns:0 frame:0
          TX packets:55239822 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:311977801 (297.5 MiB)  TX bytes:4190442061 (3.9 GiB)
          Interrupt:98 Memory:dc000000-dc012800 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:102371 errors:0 dropped:0 overruns:0 frame:0
          TX packets:102371 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:109111704 (104.0 MiB)  TX bytes:109111704 (104.0 MiB)


Problem is, when you try to convert them using VMware Converter, you will get this error:

"unable to query the live linux source machine"

In the example here it's RedHat, but the concept applies to many flavors.

Due to a programming issue in VMware Converter, the converter gets confused by the dot (.) in the interface name and bombs out.  So you need to rename the interface from something like ifcfg-eth0.28 to ifcfg-vlan28.




Please do this using an iDRAC or iLO or console, as more likely than not, you will lose network connectivity over an SSH session....



Now to the details: you need the configuration inside the interface file to look like this:

VLAN=yes
VLAN_NAME_TYPE=VLAN_PLUS_VID_NO_PAD
DEVICE=vlan28
PHYSDEV=eth0
BOOTPROTO=static
ONBOOT=yes
TYPE=Ethernet
IPADDR=10.28.0.2
NETMASK=255.255.0.0

(note the VLAN_NAME_TYPE=VLAN_PLUS_VID_NO_PAD line, that's crucial, as the line in there may read VLAN_NAME_TYPE=DEV_PLUS_VID_NO_PAD and that won't work!!)

Now you need to remove the VLAN interface that was already there; you can do it with vconfig:

#vconfig rem eth0.28
Removed VLAN -:eth0.28:-

Now issue a "service network restart" (or /etc/init.d/network restart

and you should see the new name (vlan28) in the interface list.  

You can see what vlans there are like this:

#ls /proc/net/vlan
config       vlan28

(this is after I restarted the network of course)  


[root@linux01 ~]# ifconfig 
---snip---
vlan28   Link encap:Ethernet  HWaddr 78:2B:CB:10:2B:6F  
          inet addr:10.28.0.2  Bcast:10.225.255.255  Mask:255.255.0.0
          inet6 addr: fe80::7a2b:cbff:fe10:2b6f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:26368 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3701 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2942976 (2.8 MiB)  TX bytes:496224 (484.5 KiB)





Wednesday, December 10, 2014

Allowing SSH to ESXi Servers with public/private key authentication


If you have a large number of ESXi hosts that you need to SSH to and they all have various passwords, you can set up key-based authentication instead (this is not super secure, so do your own security assessment first).

Just like you can do this on a Unix host, you can do the same for ESXi:

1.  Generate a Public/Private key on the linux host:

cd ~/.ssh
ssh-keygen -t rsa

This will create two files in ~/.ssh: id_rsa and id_rsa.pub.

In ESXi 5.x, the location of authorized_keys is: /etc/ssh/keys-<username>/authorized_keys

So you can do this:

scp /root/.ssh/id_rsa.pub remote-ESXi-host:/etc/ssh/keys-root/authorized_keys

Like this for example:

scp /root/.ssh/id_rsa.pub 192.168.3.102:/etc/ssh/keys-root/authorized_keys

Of course, if you want to do this from more than one source host, just append to the authorized_keys file rather than overwriting it.
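For a pile of hosts, a small loop that appends (rather than overwrites) keeps any existing keys intact. This is a sketch, assuming an esxi-hosts.txt file with one hostname per line and SSH enabled on each host:

for h in `cat esxi-hosts.txt` ; do cat ~/.ssh/id_rsa.pub | ssh root@$h 'cat >> /etc/ssh/keys-root/authorized_keys' ; done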