Using Boto3 Against HPE Helion Eucalyptus 4.2 Deployments

Recently, a blog entry was posted on the AWS Developer Blog discussing how to migrate to boto3.  Since HPE Helion Eucalyptus strives to provide 100% AWS-compatible APIs for the services it implements, AWS SDKs – such as the AWS SDK for Python – work solidly against it.  This blog entry will demonstrate how to use boto3 – the latest version of the AWS SDK for Python – with HPE Helion Eucalyptus 4.2.

At the time of the posting of this blog entry, the following AWS service APIs are supported by HPE Helion Eucalyptus 4.2:

Installation

As mentioned in the boto3 documentation, install boto3 using pip:

# pip install boto3

Configuration

Again, as mentioned in the boto3 documentation, configuration can be done by using the AWS CLI, or by manually creating the config and credentials files under the .aws directory.  For example, here are the contents of the .aws/config and .aws/credentials files that will be used for this demonstration:

# cat .aws/config
[profile devops-admin]
output = json
region = us-east-1
# cat .aws/credentials
[devops-admin]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXX

If you prefer not to use these files, you can instead pass the AWS Access Key ID and AWS Secret Access Key programmatically.  This is demonstrated later in this blog entry.

Using Boto3

To demonstrate how to use boto3, ipython will be utilized.  To get started, the Session class will be imported from the boto3 library:

# ipython
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: from boto3.session import Session

Next, create the session:

In [2]: session = Session(region_name='us-east-1', profile_name="devops-admin")

As mentioned earlier, if you would rather pass the AWS Access Key ID and AWS Secret Access Key programmatically, this can be done when the session is created:

In [2]: session = Session(aws_access_key_id='XXXXXXXXXXXXXX', aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXX', region_name='us-east-1')

Even though region_name has a value here, when the client connection is created, the service endpoint will be an HPE Helion Eucalyptus service endpoint.  Any valid AWS region name can be used with HPE Helion Eucalyptus.  The important piece is the endpoint URL.

From here, we can use the session to establish a client connection with a given HPE Helion Eucalyptus service endpoint.  Since the HPE Helion Eucalyptus cloud used in this example contains HTTPS endpoints, the trusted root certificate for the cloud subdomain will be passed as well.

Examples

Here is an example connecting to the EC2 service endpoint provided by the HPE Helion Eucalyptus Compute service to discover which instances are associated with the authenticated user account:

In [3]: client = session.client('ec2', endpoint_url='https://ec2.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [4]: for reservation in client.describe_instances()['Reservations']: 
  for instance in reservation['Instances']:
    print instance['InstanceId']
 ...:
i-4064f4e7
i-1c8515dd
i-79e96bc1
i-d43f50f1
i-b4adc06b
i-c4025e42

Below is another example connecting to the S3 service endpoint provided by the HPE Helion Eucalyptus Object Storage Gateway (OSG) service to list the buckets owned by the authenticated user account:

In [5]: client = session.client('s3', endpoint_url='https://s3.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [6]: for bucket in client.list_buckets()['Buckets']: 
  print bucket['Name']
 ...:
cfn-templates
ubuntu-trusty-x86_64-hvm-20151218
ubuntu-xenial-x86_64-hvm-20151217

Another example connecting to the Cloudformation service endpoint provided by the HPE Helion Eucalyptus Cloudformation service:

In [7]: client = session.client('cloudformation', endpoint_url='https://cloudformation.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [8]: for stack in client.describe_stacks()['Stacks']:
    print "Stack Name: " + stack['StackName']
    print "Status: " + stack['StackStatus']
    print "ID: " + stack['StackId']
   ...:
Stack Name: CoreOSCluster
Status: CREATE_COMPLETE
ID: arn:aws:cloudformation::001520216600:stack/CoreOSCluster/12437fe7-8a03-4920-9e34-270764450fa0

And for the last example, connecting to the AutoScaling service endpoint provided by the HPE Helion Eucalyptus AutoScaling service:

In [9]: client = session.client('autoscaling', endpoint_url='https://autoscaling.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [10]: for asg in client.describe_auto_scaling_groups()['AutoScalingGroups']:
    print "AutoScaling Group Name: " + asg['AutoScalingGroupName']
    print "Launch Config: " + asg['LaunchConfigurationName']
    print "Availability Zones:"
    for az in asg['AvailabilityZones']:
        print "\t" + az
    print "AutoScaling Group Instances:"
    for instance in asg['Instances']:
        print "\t" + instance['InstanceId']
   ....:
AutoScaling Group Name: CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI
Launch Config: CoreOSCluster-CoreOsLaunchConfig-LAWHOT5X5K5PX
Availability Zones:
 us-east-1c
 us-east-1b
 us-east-1a
AutoScaling Group Instances:
 i-79e96bc1
 i-4064f4e7
 i-c4025e42
 i-d43f50f1
 i-1c8515dd
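
As a side note, boto3 also provides a higher-level resource interface on top of these clients.  Here is a minimal sketch, reusing the session, endpoint URL and trusted certificate from the EC2 client example above:

from boto3.session import Session

session = Session(region_name='us-east-1', profile_name='devops-admin')
# The resource interface wraps the same EC2 API exposed by the HPE Helion
# Eucalyptus Compute service endpoint used earlier.
ec2 = session.resource('ec2',
                       endpoint_url='https://ec2.c-05.autoqa.qa1.eucalyptus-systems.com/',
                       verify='/root/euca-ca-0.crt')
for instance in ec2.instances.all():
    print instance.id

The client interface used throughout this entry maps one-to-one to the service APIs, so it remains the primary focus here; the resource interface is shown only for completeness.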

Conclusion

As mentioned earlier, boto3 can be used with any AWS-compatible service implemented by HPE Helion Eucalyptus.  If your team isn’t ready to use boto3 yet, boto can still be used with HPE Helion Eucalyptus.

As always, I hope you enjoyed this entry.  Please let me know if there are any questions/suggestions/ideas regarding this blog topic.

Enjoy!

 


Updated CoreOS Cluster Cloudformation Template for HPE Helion Eucalyptus 4.2 VPC Deployments

In 2014, I created a series of blog posts that discussed using CoreOS on Eucalyptus cloud infrastructures.  This blog post is an updated version of the entry that discussed how to deploy a CoreOS cluster using a Cloudformation template on Eucalyptus 4.0.1.  It will cover how to deploy a CoreOS cluster using Cloudformation on an HPE Helion Eucalyptus 4.2 VPC environment.

In HPE Helion Eucalyptus 4.1, VPC (Virtual Private Cloud) was in a technical preview state.  With the release of Eucalyptus 4.2, VPC was promoted to a stable release.  HPE Helion Eucalyptus VPC provides features similar to AWS VPC.  For more information about what is currently supported in Eucalyptus VPC, please refer to the online documentation.

Prerequisites

Prerequisites for this blog entry are listed in the following previous blogs:

Please note the information regarding HPE Helion Eucalyptus IAM and how to obtain the CoreOS Beta AMI image in the previously listed blog entries.

CoreOS ETCD Discovery Service Token

When setting up the CoreOS cluster, the method used to handle cluster membership is etcd discovery.  This provides a unique discovery URL that shows all the members of the cluster.  To obtain a token for the desired cluster size, request the following URL with the size of the cluster as a query parameter.  For example, if the cluster will have five members, the curl request will look like the following:

curl https://discovery.etcd.io/new?size=5

The value returned will look similar to the following:

https://discovery.etcd.io/fdd7d8ac203d2cac0c27ead148ad83ed

This URL can be referenced to see if all the members of the cluster registered successfully.
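
To check the registration programmatically instead of reading the raw JSON, here is a minimal Python sketch (standard library only) that fetches the discovery URL and counts the registered members – the URL below is the example token from above:

import json
import urllib2

discovery_url = 'https://discovery.etcd.io/fdd7d8ac203d2cac0c27ead148ad83ed'

# Fetch the etcd discovery registry entry and count the registered members.
registry = json.load(urllib2.urlopen(discovery_url))
members = registry['node'].get('nodes', [])
print "%d member(s) registered" % len(members)
for member in members:
    print member['value']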

Deploying the Cluster on HPE Helion Eucalyptus VPC

When deploying the cluster on a Eucalyptus VPC environment, there are additional variables that have to be taken into account.  To download the example template, use the following URL:

https://s3-us-west-1.amazonaws.com/cfn-coreos-deployment/cfn-coreos-as-vpc.json

After downloading the template, use either euca2ools or the AWS CLI to validate the template.  This will display the parameters that need to be passed when creating the Cloudformation stack on Eucalyptus.  For example:

# euform-validate-template --template-file cfn-coreos-as.json 
DESCRIPTION Deploy CoreOS Cluster on Eucalyptus VPC
PARAMETER VpcId false VpcId of your existing Virtual Private Cloud (VPC)
PARAMETER Subnets false The list of SubnetIds in your Virtual Private Cloud (VPC)
PARAMETER AZs false The list of AvailabilityZones for your Virtual Private Cloud (VPC)
PARAMETER CoreOSImageId false CoreOS Image Id
PARAMETER UserKeyPair true User Key Pair
PARAMETER ClusterSize false Desired CoreOS Cluster Size
PARAMETER VmType false Desired VM Type for Instances

Notice that the template requires variables unique to an HPE Helion Eucalyptus VPC deployment (VpcId, Subnets, and AZs).

Now that the template has been downloaded, create the CoreOS stack using euca2ools.  For example:

# euform-create-stack CoreOSCluster --template-file cfn-coreos-as.json --parameter Subnets=subnet-0814e7aa,subnet-5d816215,subnet-c3755d6c --parameter AZs=euca-east-1c,euca-east-1b,euca-east-1a --parameter CoreOSImageId=emi-dfa27782 --parameter UserKeyPair=devops-admin --parameter ClusterSize=5 --parameter VmType=m1.large --parameter VpcId=vpc-d7fcff27
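
As a side note, the stack can also be created programmatically with boto3, following the client setup from the boto3 entry above.  The sketch below is only an illustration – the endpoint URL and certificate path are carried over from that entry as assumptions, and the parameter values mirror the euform-create-stack command:

from boto3.session import Session

session = Session(region_name='us-east-1', profile_name='devops-admin')
client = session.client('cloudformation',
                        endpoint_url='https://cloudformation.c-05.autoqa.qa1.eucalyptus-systems.com/',
                        verify='/root/euca-ca-0.crt')

# Read the downloaded template and pass the same parameters used above.
with open('cfn-coreos-as.json') as template:
    client.create_stack(
        StackName='CoreOSCluster',
        TemplateBody=template.read(),
        Parameters=[
            {'ParameterKey': 'VpcId', 'ParameterValue': 'vpc-d7fcff27'},
            {'ParameterKey': 'Subnets', 'ParameterValue': 'subnet-0814e7aa,subnet-5d816215,subnet-c3755d6c'},
            {'ParameterKey': 'AZs', 'ParameterValue': 'euca-east-1c,euca-east-1b,euca-east-1a'},
            {'ParameterKey': 'CoreOSImageId', 'ParameterValue': 'emi-dfa27782'},
            {'ParameterKey': 'UserKeyPair', 'ParameterValue': 'devops-admin'},
            {'ParameterKey': 'ClusterSize', 'ParameterValue': '5'},
            {'ParameterKey': 'VmType', 'ParameterValue': 'm1.large'},
        ],
    )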

Once the stack has been created, confirm that it deployed successfully:

# euform-describe-stacks
STACK CoreOSCluster CREATE_COMPLETE Complete! Deploy CoreOS Cluster on Eucalyptus VPC 2016-01-01T21:09:10.965Z
PARAMETER VpcId vpc-d7fcff27
PARAMETER Subnets subnet-0814e7aa,subnet-5d816215,subnet-c3755d6c
PARAMETER AZs euca-east-1c,euca-east-1b,euca-east-1a
PARAMETER CoreOSImageId emi-dfa27782
PARAMETER UserKeyPair ****
PARAMETER ClusterSize 5
PARAMETER VmType m1.large
OUTPUT AutoScalingGroup CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI

Check the discovery URL using curl, wget or any browser to confirm that the cluster membership completed:

# curl https://discovery.etcd.io/fdd7d8ac203d2cac0c27ead148ad83ed
{"action":"get","node":{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed","dir":true,"nodes":[{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/d0a4c6d73d0d8d17","value":"8981923b54d7d7f46fabc527936a7dcf=http://172.31.4.17:2380","modifiedIndex":953833155,"createdIndex":953833155},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/12b6e6e78c9cb70c","value":"33a3209006d2be1d5be0da6eaea007c5=http://172.31.19.215:2380","modifiedIndex":953833156,"createdIndex":953833156},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/d5c5d93e360ba87","value":"e71b1fefcd65c43a0fbacc7103efbc2b=http://172.31.22.157:2380","modifiedIndex":953833162,"createdIndex":953833162},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/cffd4985c990f872","value":"f047b9ff24f3d0c4e74c660709103b36=http://172.31.6.166:2380","modifiedIndex":953833167,"createdIndex":953833167},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/8e6ccfef42f98260","value":"c48b163558b61733c1aa44dccb712406=http://172.31.47.175:2380","modifiedIndex":953833339,"createdIndex":953833339}],"modifiedIndex":953831075,"createdIndex":953831075}}

To confirm the health of the cluster, SSH into one of the cluster nodes, and use fleetctl and etcdctl:

# ssh -i devops-admin-key core@euca-10-116-131-230.eucalyptus.c-05.autoqa.qa1.eucalyptus-systems.com
Last login: Sat Jan 2 23:53:25 2016 from 10.111.1.71
CoreOS beta (877.1.0)
core@euca-172-31-22-157 ~ $ fleetctl list-machines
MACHINE IP METADATA
33a32090... 10.116.131.107 purpose=coreos-cluster,region=euca-us-east-1
8981923b... 10.116.131.121 purpose=coreos-cluster,region=euca-us-east-1
c48b1635... 10.116.131.213 purpose=coreos-cluster,region=euca-us-east-1
e71b1fef... 10.116.131.230 purpose=coreos-cluster,region=euca-us-east-1
f047b9ff... 10.116.131.197 purpose=coreos-cluster,region=euca-us-east-1
core@euca-172-31-22-157 ~ $ etcd
etcd etcd2 etcdctl
core@euca-172-31-22-157 ~ $ etcdctl cluster-health
member d5c5d93e360ba87 is healthy: got healthy result from http://10.116.131.230:2379
member 12b6e6e78c9cb70c is healthy: got healthy result from http://10.116.131.107:2379
member 8e6ccfef42f98260 is healthy: got healthy result from http://10.116.131.213:2379
member cffd4985c990f872 is healthy: got healthy result from http://10.116.131.197:2379
member d0a4c6d73d0d8d17 is healthy: got healthy result from http://10.116.131.121:2379
cluster is healthy
core@euca-172-31-22-157 ~ $ etcdctl member list
d5c5d93e360ba87: name=e71b1fefcd65c43a0fbacc7103efbc2b peerURLs=http://172.31.22.157:2380 clientURLs=http://10.116.131.230:2379
12b6e6e78c9cb70c: name=33a3209006d2be1d5be0da6eaea007c5 peerURLs=http://172.31.19.215:2380 clientURLs=http://10.116.131.107:2379
8e6ccfef42f98260: name=c48b163558b61733c1aa44dccb712406 peerURLs=http://172.31.47.175:2380 clientURLs=http://10.116.131.213:2379
cffd4985c990f872: name=f047b9ff24f3d0c4e74c660709103b36 peerURLs=http://172.31.6.166:2380 clientURLs=http://10.116.131.197:2379
d0a4c6d73d0d8d17: name=8981923b54d7d7f46fabc527936a7dcf peerURLs=http://172.31.4.17:2380 clientURLs=http://10.116.131.121:2379

That's it! The CoreOS cluster has been successfully deployed.  Given HPE Helion Eucalyptus’s AWS compatibility, this template can be used on AWS as well.

As always, please let me know if there are any questions.  Enjoy!


Using Boto’s connect_to_region function with Eucalyptus 4.1

Typically, when documenting how to use the Python interface boto with Eucalyptus clouds, the connect_<service> functions are leveraged (<service> being ec2, s3, iam, sts, elb, cloudformation, cloudwatch, or autoscale).  The focus of this blog entry is to demonstrate how to use the connect_to_region function associated with each AWS service supported by Eucalyptus.  This becomes very important when using boto with federated Eucalyptus clouds, which will be available in the HP Helion Eucalyptus 4.2.0 release.

Prerequisites

To get started, the following requirements need to be met:

The Setup

Once the prerequisites are met, we can focus on setting up the configuration file for boto.  Below is an example of a boto configuration file:

[Credentials]
aws_access_key_id = <Eucalyptus Access Key ID>
aws_secret_access_key = <Eucalyptus Secret Access Key>
[Boto]
is_secure = False
endpoints_path = /root/boto-qa-setup1-endpoints.json

Notice the file boto-qa-setup1-endpoints.json.  This is a JSON file that contains the Eucalyptus service API endpoints corresponding to the Eucalyptus cloud to be accessed.  Here are the contents of the boto-qa-setup1-endpoints.json file:

{
  "autoscaling": {
    "eucalyptus": "autoscaling.h-01.autoqa.qa1.eucalyptus-systems.com"
  },
  "cloudformation": {
    "eucalyptus": "cloudformation.h-01.autoqa.qa1.eucalyptus-systems.com"
  },
  "cloudwatch": {
    "eucalyptus": "cloudwatch.h-01.autoqa.qa1.eucalyptus-systems.com"
  },
  "ec2": {
    "eucalyptus": "compute.h-01.autoqa.qa1.eucalyptus-systems.com"
  },
  "elasticloadbalancing": {
    "eucalyptus": "loadbalancing.h-01.autoqa.qa1.eucalyptus-systems.com"
  },
  "iam": {
    "eucalyptus": "euare.h-01.autoqa.qa1.eucalyptus-systems.com"
  },
  "s3": {
    "eucalyptus": "objectstorage.h-01.autoqa.qa1.eucalyptus-systems.com"
  },
  "sts": {
    "eucalyptus": "tokens.h-01.autoqa.qa1.eucalyptus-systems.com"
  },
  "swf": {
    "eucalyptus": "simpleworkflow.h-01.autoqa.qa1.eucalyptus-systems.com"
  }
}

As you can see, each AWS service implemented by Eucalyptus is defined in this file.  With the boto configuration file and endpoints json file defined, the connect_to_region function in boto can be easily utilized.

Example

To show how this setup works, ipython will be used as the demonstration environment.  Below is an example that shows how to use the connect_to_region function with the Compute service (EC2) against a Eucalyptus 4.1 cloud.

# ipython
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import boto.ec2
In [2]: ec2_connection = boto.ec2.connect_to_region('eucalyptus', port=8773)
In [3]: ec2_connection.get_all_instance_types()
Out[3]:
[InstanceType:m1.small-1,256,5,
 InstanceType:t1.micro-1,256,5,
 InstanceType:m1.medium-1,512,10,
 InstanceType:c1.medium-2,512,10,
 InstanceType:m1.large-2,512,10,
 InstanceType:m1.xlarge-2,1024,10,
 InstanceType:c1.xlarge-2,2048,10,
 InstanceType:m2.xlarge-2,2048,10,
 InstanceType:m3.xlarge-4,2048,15,
 InstanceType:m2.2xlarge-2,4096,30,
 InstanceType:m3.2xlarge-4,4096,30,
 InstanceType:cc1.4xlarge-8,3072,60,
 InstanceType:m2.4xlarge-8,4096,60,
 InstanceType:hi1.4xlarge-8,6144,120,
 InstanceType:cc2.8xlarge-16,6144,120,
 InstanceType:cg1.4xlarge-16,12288,200,
 InstanceType:cr1.8xlarge-16,16384,240,
 InstanceType:hs1.8xlarge-48,119808,24000]

Here is an example of describing the volumes and snapshots on a given Eucalyptus cloud:

In [3]: ec2_connection.get_all_volumes()
Out[3]:
[Volume:vol-563802cd,
 Volume:vol-274c3628,
 Volume:vol-7074a3a8,
 Volume:vol-172cdb42,
 Volume:vol-d00e53e9,
 Volume:vol-4f370899]
In [4]: ec2_connection.get_all_snapshots()
Out[4]:
[Snapshot:snap-6d874d5a,
 Snapshot:snap-e41c6adc,
 Snapshot:snap-63700417,
 Snapshot:snap-6055a378,
 Snapshot:snap-91d70d6b,
 Snapshot:snap-7b39eca8,
 Snapshot:snap-80a4f3e2]
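
The other services defined in the endpoints JSON file work the same way.  As a quick sketch, assuming the same boto configuration and endpoints file shown above, here is the equivalent for Cloudformation and AutoScaling:

import boto.cloudformation
import boto.ec2.autoscale

# Both lookups resolve the 'eucalyptus' region through the endpoints JSON file.
cfn_connection = boto.cloudformation.connect_to_region('eucalyptus', port=8773)
for stack in cfn_connection.describe_stacks():
    print stack.stack_name, stack.stack_status

as_connection = boto.ec2.autoscale.connect_to_region('eucalyptus', port=8773)
for group in as_connection.get_all_groups():
    print group.name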

Just like the EC2 tutorial by boto demonstrates how to use connect_to_region with AWS EC2, this function can be used against Eucalyptus 4.1 clouds.  As mentioned earlier, when region support (multiple Eucalyptus clouds) becomes available in Eucalyptus 4.2, this function will be very useful.

Enjoy!


Using AWS CodeDeploy with Eucalyptus Cloudformation for On-Premise Application Deployments

Background

Recently, Amazon Web Services (AWS) announced that their CodeDeploy service supports on-premise instances.  This is extremely valuable – especially for developers and administrators who want to utilize existing on-premise resources.

For teams who are using HP Helion Eucalyptus 4.1 (or who want to use Eucalyptus), this is even better news.  With this feature – along with HP Helion Eucalyptus 4.1 Cloudformation – developers can deploy applications within a private HP Helion Eucalyptus cloud environment.  This makes it even easier for developers and administrators to separate out and maintain production (AWS) versus development (HP Helion Eucalyptus) environments (or vice versa).  In addition, since HP Helion Eucalyptus strives for AWS compatibility, the Cloudformation templates used on Eucalyptus can be used with AWS – with just a couple of modifications.

The Setup

To leverage on-premise instances with AWS CodeDeploy, please reference the AWS documentation entitled “Configure Existing On-Premises Instances by Using AWS CodeDeploy“.  To use these steps with an HP Helion Eucalyptus cloud, a slight change had to be made to the AWS CLI tools.  When using the ‘aws deploy register’ command, the AWS CLI checks to see if the instance is running on an AWS environment by confirming that the instance metadata is present.  For on-premise cloud environments that provide the same service, this will cause the on-premise instance registration to fail.  To resolve this issue, I updated the AWS CLI tools with a patch that checks the instance metadata variable ‘AMI ID’ – which on AWS will begin with ‘ami’.  All images on Eucalyptus start with ’emi’ (i.e. Eucalyptus Machine Images).  With this patch, on-premise instance registration completes without a problem.
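
The patch itself is not shown here, but the idea behind the check can be sketched with a hypothetical helper like the following:

import urllib2

# The standard instance metadata endpoint, available on both AWS and Eucalyptus.
METADATA_AMI_ID_URL = 'http://169.254.169.254/latest/meta-data/ami-id'

def running_on_aws():
    """Treat the instance as an EC2 instance only when the image ID looks like
    an AWS AMI.  Eucalyptus answers the same metadata request, but its image
    IDs begin with 'emi', so 'aws deploy register' can safely treat the
    instance as on-premise."""
    try:
        image_id = urllib2.urlopen(METADATA_AMI_ID_URL, timeout=2).read()
    except Exception:
        return False
    return image_id.startswith('ami')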

In addition to the patch, the following is needed on HP Helion Eucalyptus 4.1 cloud environments:

  1. Ubuntu Server 14.04 LTS EMI (EBS-backed or Instance Store-Backed)
  2. Eucalyptus IAM access policy actions that allow the user to use CloudFormation, AutoScaling and EC2 actions.  (Along with the Eucalyptus documentation, reference the AWS IAM documentation as well.)

Once these requirements have been met on the HP Helion Eucalyptus 4.1 environment, developers can use their AWS credentials in the Eucalyptus Cloudformation templates to leverage the on-premise instances with AWS CodeDeploy.

Using Eucalyptus Cloudformation For Instance Deployment

To help get started, I provided the following example Cloudformation templates:

Each template has specific parameters that need values.  The key parameters are the following:

  • UserKeyPair -> Eucalyptus EC2 Key Pair
  • UbuntuImageId -> Ubuntu 14.04 Cloud Image (EMI)
  • SSHLocation -> IP address range that can SSH into the Eucalyptus instances

Once there are values for these parameters, the Cloudformation templates can be utilized to deploy the on-premise instances.

Configure Existing On-Premises Instances by Using AWS CodeDeploy

After the AWS IAM prerequisites have been met for AWS CodeDeploy, use the example Cloudformation templates with HP Helion Eucalyptus.  Below is an example output of both templates being used on a given HP Helion Eucalyptus 4.1 cloud:

# euform-describe-stacks --region account2-admin@eucalyptus-cloud
STACK UbuntuCodeDeployTest CREATE_COMPLETE Complete! Eucalyptus Cloudformation Example => Deploy an instance that is configured and registered as an on-premise instance with AWS CodeDeploy 2015-04-14T02:42:01.325Z
PARAMETER UbuntuImageId emi-759e12a3
PARAMETER UserKeyPair account2-admin
OUTPUT InstanceId i-df9af6f5
OUTPUT AZ thugmotivation101
OUTPUT PublicIP 10.111.75.103
STACK UbuntuCodeDeployAutoScalingTest CREATE_COMPLETE Complete! Eucalyptus CloudFormation Sample Template AutoScaling-Single AZ for AWS CodeDeploy on-premise instances. The autoscaling group is configured to span in one availability zone (one cluster) and is Auto-Scaled based on the CPU utilization of the servers. In addition, each instance will be registered as an on-premise instance with AWS CodeDeploy. Please refer to http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-configure-on-premises-host.html for additional information. 2015-04-14T02:41:44.733Z
PARAMETER InstanceType m1.xlarge
PARAMETER UbuntuImageId emi-759e12a3
PARAMETER UserKeyPair account2-admin
PARAMETER MinSize 2
PARAMETER MaxSize 4
PARAMETER Zone theinspiration
OUTPUT AutoScalingGroup UbuntuCodeDeployAutoScalingTest-ServerGroup-211FTERKLII6T

Since both Eucalyptus Cloudformation stacks have successfully deployed, let’s check out the instances:

# euca-describe-instances --region account2-admin@eucalyptus-cloud
RESERVATION r-feeb1023 968367465792 UbuntuCodeDeployTest-CodeDeploySecurityGroup-HP5L5HRU3WI98
INSTANCE i-df9af6f5 emi-759e12a3 euca-10-111-75-103.eucalyptus.a-35.autoqa.qa1.eucalyptus-systems.com euca-10-111-75-107.eucalyptus.internal running account2-admin 0 m1.xlarge 2015-04-14T02:42:11.346Z thugmotivation101 monitoring-disabled 10.111.75.103 10.111.75.107 instance-store hvm sg-422ed69a x86_64
TAG instance i-df9af6f5 aws:cloudformation:logical-id CodeDeployInstance
TAG instance i-df9af6f5 aws:cloudformation:stack-id arn:aws:cloudformation::968367465792:stack/UbuntuCodeDeployTest/b210c81a-7e34-476f-9c59-7ea69ac9647a
TAG instance i-df9af6f5 aws:cloudformation:stack-name UbuntuCodeDeployTest
RESERVATION r-10df526e 968367465792 UbuntuCodeDeployAutoScalingTest-InstanceSecurityGroup-B2OVH0XWAFN5S
INSTANCE i-9b2b14e3 emi-759e12a3 euca-10-111-75-97.eucalyptus.a-35.autoqa.qa1.eucalyptus-systems.com euca-10-111-75-106.eucalyptus.internal running account2-admin 0 m1.xlarge 2015-04-14T02:42:05.939Z theinspiration monitoring-enabled 10.111.75.97 10.111.75.106 instance-store hvm d739a9eb-ba3c-4f16-940c-366a516cebfe_theinspiration_1 sg-556b10ce x86_64
TAG instance i-9b2b14e3 Name UbuntuCodeDeployAutoScalingTest
TAG instance i-9b2b14e3 aws:autoscaling:groupName UbuntuCodeDeployAutoScalingTest-ServerGroup-211FTERKLII6T
TAG instance i-9b2b14e3 aws:cloudformation:logical-id ServerGroup
TAG instance i-9b2b14e3 aws:cloudformation:stack-id arn:aws:cloudformation::968367465792:stack/UbuntuCodeDeployAutoScalingTest/2a5aefc6-c5c3-41e8-a9b4-a9ca095c1696
TAG instance i-9b2b14e3 aws:cloudformation:stack-name UbuntuCodeDeployAutoScalingTest
RESERVATION r-6c8a9642 968367465792 UbuntuCodeDeployAutoScalingTest-InstanceSecurityGroup-B2OVH0XWAFN5S
INSTANCE i-12f1a3a3 emi-759e12a3 euca-10-111-75-101.eucalyptus.a-35.autoqa.qa1.eucalyptus-systems.com euca-10-111-75-111.eucalyptus.internal running account2-admin 0 m1.xlarge 2015-04-14T02:42:05.872Z theinspiration monitoring-enabled 10.111.75.101 10.111.75.111 instance-store hvm 16a61ee7-d143-4f08-b926-c711ce335a1a_theinspiration_1 sg-556b10ce x86_64
TAG instance i-12f1a3a3 Name UbuntuCodeDeployAutoScalingTest
TAG instance i-12f1a3a3 aws:autoscaling:groupName UbuntuCodeDeployAutoScalingTest-ServerGroup-211FTERKLII6T
TAG instance i-12f1a3a3 aws:cloudformation:logical-id ServerGroup
TAG instance i-12f1a3a3 aws:cloudformation:stack-id arn:aws:cloudformation::968367465792:stack/UbuntuCodeDeployAutoScalingTest/2a5aefc6-c5c3-41e8-a9b4-a9ca095c1696
TAG instance i-12f1a3a3 aws:cloudformation:stack-name UbuntuCodeDeployAutoScalingTest

As we can see above, the Eucalyptus Cloudformation instances are tagged just as if they were running on AWS – again demonstrating the AWS compatibility desired by HP Helion Eucalyptus.

Now, look in the AWS Management Console, under the AWS CodeDeploy service.  In the dropdown under ‘AWS CodeDeploy’, select ‘On-Premise Instances’:

Displaying the dropdown box options under AWS CodeDeploy

Once that has been selected, the on-premise instances running on HP Helion Eucalyptus should show up as ‘Registered’:

Display of Registered On-Premise Instances for AWS CodeDeploy

Now developers can proceed with remaining steps of using AWS CodeDeploy to do an application deployment.

Conclusion

As demonstrated, the new feature in AWS CodeDeploy allows developers to gain a true sense of a hybrid cloud environment.  This feature – along with HP Helion Eucalyptus’s AWS compatibility – makes it easy for developers and administrators to use the same toolset to deploy, manage and maintain both public and private cloud environments.  Don’t forget – using AWS CodeDeploy with on-premise instances does have an AWS pricing cost associated with it.  Check out AWS CodeDeploy Pricing for more details.

Enjoy!


Adding Eucalyptus Load Balancer Access Logging for Eucalyptus Cloud Users

Preface

Eucalyptus continues to strive as the best on-premise AWS-compatible Infrastructure as a Service (IaaS).  One of the great things about Eucalyptus being an open source platform is that if there is an AWS feature that any cloud administrator/developer wants to add, they have the ability to do it.  This blog entry will cover how to enable cloud users to have access to the Eucalyptus Load Balancer access logs – similar to how this is accomplished with the Amazon Web Services Elastic Load Balancing service.

Before we dive in, I would like to give special thanks to the following members of the Eucalyptus Engineering Team.  Without their hard work, this blog would not be possible:

Special thanks to these individuals for their continued contributions to the Eucalyptus software.

Overview

Currently, when a cloud user launches a Eucalyptus Load Balancer, they will see something similar to the following:

# eulb-create-lb hasp-euca-lb --listener "lb-port=80, protocol=http, instance-port=8888, instance-protocol=http" --availability-zone Honest
# eulb-describe-lbs
LOAD_BALANCER hasp-euca-elb hasp-euca-elb-325271821652.eulb.future.euca-hasp.cs.prc.eucalyptus-systems.com 2014-12-11T23:34:35.397Z

Notice the DNS name of the load balancer.  It has the following format:

{load balancer name}-{Account ID}.{Load Balancer DNS Subdomain}.{Eucalyptus Cloud DNS Domain}

The “{load balancer name}-{Account ID}” string is the important information in this value.

From the cloud administrator’s perspective, the load balancer is an AutoScaling group.  More information can be found in the following resources:

If the cloud administrator describes the instances running under the ‘eucalyptus‘ account and the load balancer above is running, the following would be displayed:

# euca-describe-instances 
RESERVATION r-278c161e 094999295155 euca-internal-325271821652-hasp-euca-elb
INSTANCE i-135b4b0a emi-7a4367b8 euca-10-104-7-21.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-156-121.future.internal running euca-elb 0 c1.medium 2014-12-11T23:34:44.428Z Honest monitoring-enabled 10.104.7.21 172.17.156.121 instance-store hvm c4946e25-64ed-4453-808c-9ff2ab831b47_Honest_1 sg-da911c98 arn:aws:iam::094999295155:instance-profile/internal/loadbalancer/loadbalancer-vm-325271821652-hasp-euca-elb x86_64
TAG instance i-135b4b0a Name loadbalancer-resources
TAG instance i-135b4b0a aws:autoscaling:groupName asg-euca-internal-elb-325271821652-hasp-euca-elb
TAG instance i-135b4b0a euca:node 10.104.1.218

Notice the ‘RESERVATION’ line that contains the security group that the instance is using.  If the ‘euca-internal-‘ prefix is removed, the security group has the following format:

{Account ID}-{load balancer Name}

This information matches the Load Balancer launched by the cloud user and will serve as the basis for the solution.
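
To make this concrete, a hypothetical helper that derives the owning account ID and load balancer name from the instance metadata inside the ELB instance could look like this:

import urllib2

SECURITY_GROUPS_URL = 'http://169.254.169.254/latest/meta-data/security-groups'

def owning_account_and_lb_name():
    # e.g. 'euca-internal-325271821652-hasp-euca-elb'
    group = urllib2.urlopen(SECURITY_GROUPS_URL).read().strip()
    remainder = group.replace('euca-internal-', '', 1)
    account_id, lb_name = remainder.split('-', 1)
    return account_id, lb_name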

Building the Foundation

In order to get started, the solution needs to be applied from the Cloud Administrator (i.e. admin user in the ‘eucalyptus’ account) perspective.  This solution cannot be applied by any other type of cloud user.  In addition to the cloud administrator requirement, the following is needed:

Once these requirements are met, the environment is ready to go.

Create ELB Access Log User

A user (e.g. ‘elb-osg-logger’) needs to be created under the ‘eucalyptus’ account; it will be used by the custom Python script to store the load balancer access logs in the OSG bucket.  To create the user, after sourcing the cloud administrator credentials, use euare-usercreate:

# euare-usercreate -u elb-osg-logger -k 
AKILXXXXXXXXXXXXXX
PS6nXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Store these credentials in a safe place.  Next, customize the load balancer instance.

Customize the Load Balancer

To begin, a Eucalyptus Load Balancer needs to be launched in order to modify it.  The goal here is to build an image from this instance using euca-bundle-instance.  We will start with the load balancer mentioned earlier:

# euca-describe-instances 
RESERVATION r-5e1d4d17 094999295155 euca-internal-325271821652-hasp-euca-lb
INSTANCE i-315dd646 emi-7a4367b8 euca-10-104-7-9.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-177-235.future.internal running euca-elb 0 c1.medium 2014-12-11T04:23:04.441Z Honest monitoring-enabled 10.104.7.9 172.17.177.235 instance-store hvm b134a0bc-cfc4-4c6e-84ba-4fd1df160407_Honest_1 sg-b6cc605e arn:aws:iam::094999295155:instance-profile/internal/loadbalancer/loadbalancer-vm-325271821652-hasp-euca-lb x86_64
TAG instance i-315dd646 Name loadbalancer-resources
TAG instance i-315dd646 aws:autoscaling:groupName asg-euca-internal-elb-325271821652-hasp-euca-lb
TAG instance i-315dd646 euca:node 10.104.1.218

To access the load balancer, authorize SSH to the instance:

# euca-authorize -P tcp -p 22 euca-internal-325271821652-hasp-euca-elb

Next, SSH into the ELB instance:

# ssh -i euca-elb.priv root@euca-10-104-7-9.future.future.euca-hasp.cs.prc.eucalyptus-systems.com

Once inside the instance, install the EPEL package repository:

# yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y

After the package has been installed, use yum to install the python-pip package:

# yum install python-pip -y

Next, use pip to upgrade and install the ‘boto‘ and ‘argparse‘ modules:

# pip install --upgrade boto argparse

Now, it’s time to add the custom Python script.

Add Access Logs Script

The Access Log Script performs the following actions (a minimal boto sketch of these steps appears after the list):

  • Creates a bucket with a READ bucket ACL for the Account ID which launched the Eucalyptus Load Balancer
    • bucket created with the following format – s3://access_logs-{LB name}_{public-IPV4 of LB}_{LB instance numeric ID}
  • Places a copy of “/var/log/load-balancer-access.log.1” in the bucket with a READ object ACL for the Account ID which owns the Eucalyptus Load Balancer
    • the file in the bucket will have the following naming format – elb-access-{timestamp DDMMYY-HourMinSec}.log
  • Bonus – since Eucalyptus 4.0.0, OSG has supported object lifecycle management.  If a lifecycle value is passed and it is greater than 0, the object lifecycle is applied to all objects in the bucket.
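
The actual script is downloaded in the next step; as a rough illustration of the steps above, here is a minimal boto sketch, assuming the OSG host, account ID, bucket name and rotated log path are already known (the values below are placeholders):

from datetime import datetime

from boto.s3.connection import OrdinaryCallingFormat, S3Connection
from boto.s3.lifecycle import Expiration, Lifecycle

# Placeholder values – the real script derives these from its credentials,
# the instance metadata, and its command-line options.
OSG_HOST = 'objectstorage.example-cloud.com'
ACCOUNT_ID = '325271821652'
BUCKET_NAME = 'access_logs-hasp-euca-lb_10.104.7.9_315dd646'
LOG_FILE = '/var/log/load-balancer-access.log.1'

# Credentials are read from the boto configuration/environment in this sketch.
s3 = S3Connection(host=OSG_HOST, is_secure=False, port=8773,
                  calling_format=OrdinaryCallingFormat())

# 1. Create the bucket and grant the owning account READ access on it.
bucket = s3.create_bucket(BUCKET_NAME)
bucket.add_user_grant('READ', ACCOUNT_ID)

# 2. Upload the rotated log under a timestamped name and grant READ on the object.
key = bucket.new_key('elb-access-%s.log' % datetime.now().strftime('%d%m%Y-%H%M%S'))
key.set_contents_from_filename(LOG_FILE)
key.add_user_grant('READ', ACCOUNT_ID)

# 3. Optionally expire the access logs after one day with a lifecycle rule.
lifecycle = Lifecycle()
lifecycle.add_rule('expire-access-logs', prefix='', status='Enabled',
                   expiration=Expiration(days=1))
bucket.configure_lifecycle(lifecycle)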

To add the script to the instance, use curl:

# curl http://euca-elb-access-log-blog.s3.amazonaws.com/access-log-transfer-s3.py -o access-log-transfer-s3.py

Once the script has been downloaded, edit the script and add the ‘elb-osg-logger’ user credentials, the S3_URL and EC2_URL to the script in the following locations:

EC2Connection.DefaultRegionEndpoint = '<EC2_URL - Eucalyptus Cloud Compute API DNS Name>'
ec2conn = EC2Connection(aws_access_key_id="<elb-osg-logger user Access Key ID>",
                        aws_secret_access_key="<elb-osg-logger user Secret Access Key>",
                        is_secure=False, port="8773")
s3 = S3Connection(aws_access_key_id="<elb-osg-logger user Access Key ID>",
                  aws_secret_access_key="<elb-osg-logger user Secret Access Key>",
                  host="<S3_URL - Eucalyptus Cloud OSG API DNS Name>",
                  is_secure=False, port=8773, calling_format=OrdinaryCallingFormat())

Set the script to be executable using chmod:

# chmod a+x /root/access-log-transfer-s3.py

Now it’s time to configure HAProxy to log information.

Enable HAProxy Logging

The Eucalyptus Load Balancer uses haproxy to perform load balancing.  To enable logging, the following files need to be edited:

  • /etc/load-balancer-servo/haproxy_template.conf 
    • under the ‘global’ section add – log 127.0.0.1 local3 info
    • under the ‘default’ section add – log global
  • /usr/lib/python2.6/site-packages/servo/haproxy/haproxy_conf.py
    • change the following section:
if protocol == 'http' or protocol == 'https':
    self.__content_map[section_name].append('log-format httplog\ %f\ %b\ %s\ %ST\ %ts\ %Tq\ %Tw\ %Tc\ %Tr\ %Tt')
elif protocol == 'tcp' or protocol == 'ssl':
    self.__content_map[section_name].append('log-format tcplog\ %f\ %b\ %s\ %ts\ %Tw\ %Tc\ %Tt')

to

if protocol == 'http' or protocol == 'https':
    self.__content_map[section_name].append('log-format httplog\ %f\ %b\ %s\ %ST\ %ts\ %Tq\ %Tw\ %Tc\ %Tr\ %Tt\ %{+Q}r\ %ci:%cp\ %fi:%fp\ %si:%sp\ req_size=%U\ resp_size=%B')
elif protocol == 'tcp' or protocol == 'ssl':
    self.__content_map[section_name].append('log-format tcplog\ %f\ %b\ %s\ %ts\ %Tw\ %Tc\ %Tt\ %{+Q}r\ %ci:%cp\ %fi:%fp\ %si:%sp\ req_size=%U\ resp_size=%B')

For more information about the log-format in HAProxy, reference the HAProxy documentation on log format. The information that can be logged is highly customizable.  Reference the AWS ELB documentation regarding Access Log Entries to get a better sense of the logging experience on AWS.

Logging for HAProxy is complete.  Next, rsyslog and logrotate need to be configured.

Log Management

Storing the HAProxy logs and rotating them is very important to this solution.  The script takes the rotated log and stores it in the OSG bucket for the access logs.  The purpose of this is to make sure the file is not being written to while it’s being sent to the OSG bucket.  To start out, download the load-balancer.conf file to use with logrotate using curl:

# curl http://euca-elb-access-log-blog.s3.amazonaws.com/load-balancer.conf -o load-balancer.conf

This is the logrotate configuration file that the cronjob script will call to rotate the log file and then execute the access-log-transfer-s3.py script with a 1-day object lifecycle. To change the lifecycle, just change the value of the --lifecycle option in the load-balancer.conf file.

Next, update rsyslog to make sure the latest is running on the instance:

# yum upgrade rsyslog -y

After this has completed, add the following to the /etc/rsyslog.d/load-balancer.conf file:

local3.*       /var/log/load-balancer-access.log

Follow this step up by uncommenting and adding the following lines in /etc/rsyslog.conf:

$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514

To wrap up, we need to add a script that will be kicked off by the cronjob.

Cronjob Script

To kick off the log rotation, add the ‘elb-logrotate‘ script to the instance using curl:

# curl http://euca-elb-access-log-blog.s3.amazonaws.com/elb-logrotate -o elb-logrotate

Using ‘crontab -e’, set up a cron entry for every 5 minutes (or however often you would like the access log information uploaded to the bucket):

*/5 * * * * /root/elb-logrotate

Clean Up

After completing all the customizations, the instance needs to be prepared for bundling.  Run the following commands to prepare the instance:

# echo "" > /etc/udev/rules.d/70-persistent-net.rules
# echo "" > /lib/udev/rules.d/75-persistent-net-generator.rules

If PERSISTENT_DHCLIENT is not in the /etc/sysconfig/network-scripts/ifcfg-eth0 file, then add it:

# grep PERSISTENT_DHCLIENT /etc/sysconfig/network-scripts/ifcfg-eth0
# echo "PERSISTENT_DHCLIENT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0

Now we can exit out of the instance.

Creating the New Eucalyptus Load Balancer EMI

After finishing with the instance customizations, the instance is ready to be bundled and registered.  First, use euca-bundle-instance to bundle and upload the instance.  Use euca-describe-bundle-tasks to check on the status of the bundling operation.  Once the bundling operation has been completed, use euca-register to register the new ELB EMI:

# euca-bundle-instance -b load-balancer-access-logs -p eucalyptus-load-balancer-image-access-log i-315dd646
BUNDLE bun-315dd646 i-315dd646 load-balancer-access-logs eucalyptus-load-balancer-image-access-log 2014-12-11T04:07:59.835Z 2014-12-11T04:07:59.835Z pending 0 load-balancer-access-logs/eucalyptus-load-balancer-image-access-log.manifest.xml
....
# euca-describe-bundle-tasks
BUNDLE bun-315dd646 i-315dd646 load-balancer-access-logs eucalyptus-load-balancer-image-access-log 2014-12-11T04:07:59.835Z 2014-12-11T04:09:57.671Z complete 0 load-balancer-access-logs/eucalyptus-load-balancer-image-access-log.manifest.xml
# euca-register -a x86_64 -n load-balancer-access-logs load-balancer-access-logs/eucalyptus-load-balancer-image-access-log.manifest.xml --virtualization-type hvm
IMAGE emi-7a4367b8

Now that the new Eucalyptus Load Balancer EMI is registered, update the cloud property ‘loadbalancing.loadbalancer_emi‘ to point to the new ELB EMI:

# euca-modify-property -p loadbalancing.loadbalancer_emi=emi-7a4367b8
PROPERTY loadbalancing.loadbalancer_emi emi-7a4367b8 was emi-cf4fb988

Now, let’s test out the changes.

Testing Out the ELB with Access Logging

To test it out, you can use either the Cloud Administrator or a user from a ‘non-eucalyptus’ account.  In the example below, a user from a ‘non-eucalyptus’ account was used.  If a ‘non-eucalyptus’ account user is used, make sure the user has the appropriate IAM access policies for EC2 (Compute), S3 (OSG), and ELB (Eucalyptus Load Balancer).

First, create the Eucalyptus Load Balancer:

# eulb-create-lb hasp-euca-lb --listener "lb-port=80, protocol=http, instance-port=80, instance-protocol=http" --availability-zone Honest --region account2-user11@

Next, launch an instance that has a web service running on port 80.  In this example, I used a cloud-init configuration file to install nginx on an Ubuntu 14.04 (Trusty Tahr) Cloud Image:

# euca-run-instances -k account2-user11 -t m1.medium emi-59a742d0 --user-data-file nginx-cloudinit.config --region account2-user11@
....
# euca-describe-instances --region account2-user11@
RESERVATION r-5c16c716 325271821652 default
INSTANCE i-45c1ebd1 emi-59a742d0 euca-10-104-7-29.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-248-189.future.internal running account2-user11 0 m1.medium 2014-12-05T21:53:51.197Z Honest monitoring-disabled 10.104.7.29 172.17.248.189 instance-store hvm sg-6ef9907f x86_64

Register the instance with the ELB:

# eulb-register-instances-with-lb --instances i-45c1ebd1 hasp-euca-lb --region account2-user11@
INSTANCE i-45c1ebd1

Generate some traffic to the ELB using curl or some other tool to populate the HAProxy log file.  Based upon how often the cronjob was set to execute, use s3cmd to see the bucket created in the ‘eucalyptus’ account (i.e. Cloud Administrator) for the access logs.  For information regarding s3cmd configuration files, refer to my previous blog:

# ./s3cmd/s3cmd --config=.s3cfg-cloud-admin ls
2014-09-18 02:59 s3://51c700-download-manifests
2014-12-11 04:31 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646
2014-09-18 02:43 s3://centos-6.5-x86_64-20140917
2014-09-18 02:46 s3://centos-7-x86_64-20140917
2014-11-05 22:05 s3://centos6.4-kernel
2014-11-05 21:54 s3://centos6.4-ramdisk
2014-11-05 22:08 s3://centos6.4-test
2014-09-18 02:52 s3://debian-7-x86_64-20140917
# ./s3cmd/s3cmd --config=.s3cfg-cloud-admin ls s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646
2014-12-11 04:31 817 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-043122.log
2014-12-11 05:13 78764 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-051353.log
2014-12-11 05:20 58202 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-052002.log

Once that has been confirmed, create another s3cmd configuration file for the ‘non-eucalyptus’ user, and confirm the user can list the contents of the bucket:

# ./s3cmd/s3cmd --config=.s3cfg-acct2-user11 ls s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646
2014-12-11 04:31 817 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-043122.log
2014-12-11 05:13 78764 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-051353.log
2014-12-11 05:20 58202 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-052002.log

After that has been confirmed, download one of the log files and confirm the contents:

# ./s3cmd/s3cmd --config=.s3cfg-acct2-user11 get s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-051353.log .
s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-051353.log -> ./elb-access-11122014-051353.log [1 of 1]
 78764 of 78764 100% in 0s 238.84 kB/s done
 
# cat elb-access-11122014-051353.log
Dec 11 04:32:11 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:49960 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:35 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:50216 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:36 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 4 0 1 1 6 "HEAD / HTTP/1.1" 10.5.1.70:50217 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:38 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 5 0 0 1 6 "HEAD / HTTP/1.1" 10.5.1.70:50218 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:39 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:50219 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:40 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:50220 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:41 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:50221 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:42 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 4 0 1 1 6 "HEAD / HTTP/1.1" 10.5.1.70:50222 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
.....

How is this ‘non-eucalyptus’ user able to see and download the contents of this bucket?  This is because of the script that creates the access log bucket and uploads the logs to it.  By grabbing the account ID from the instance metadata ‘security group’ category, the script adds bucket and object READ ACLs for that account ID.  The only issue here is that the cloud administrator will still need to communicate to the cloud user which bucket to access for the logs.  With the extra bonus of using the object lifecycle, the cloud administrator doesn’t have to worry about managing the buckets.  The objects will remove themselves after the defined period of time.

Conclusion

Even though the solution isn’t exactly like the AWS ELB Access Logs feature, it does provide something very similar to it.  The only thing missing is the service API interaction to enable/disable the access logging feature, set the interval and define the bucket that will be used.  Hopefully, this will be a feature we will see in the not too distant future.  Thanks for hanging in there with me.  I hope you enjoy!  Feedback is always welcome.

Cheers!


Cloud Image Management on Eucalyptus: Creating a CentOS 6.6 EMI With ZFS Support

ZFS is a filesystem designed by Sun Microsystems that focuses on data integrity.  What makes this such an attractive filesystem to use in the cloud is that a cloud user can easily do the following:

  • set up an LVM + RAID filesystem for storing large amounts of data (e.g. database information)
  • expand the filesystem by adding more storage (i.e. EBS volumes)
  • backup the filesystem without taking the filesystem offline/unmounting
  • restore the filesystem

This blog entry will focus on how a cloud user can create their own Eucalyptus Machine Image (EMI) that has ZFS support.  The CentOS 6.5 EMI on the Eucalyptus Machine Image Catalog will be used as the base image.

Before Starting…

Before following the steps in this blog, make sure the following is in place:

Once these requirements have been met, everything should be ready to go.

Set Up Base Image/Instance

To begin, follow the ‘Quick Start’ instructions mentioned on the Eucalyptus Machine Image Catalog page.  This will install all the images provided by the catalog.  When the process has finished, list the CentOS 6.5 EMI.  For example:

# euca-describe-images emi-bdcec010 
IMAGE emi-bdcec010 centos-6.5-x86_64-20140917/centos.raw.manifest.xml 094999295155 available public x86_64 machine instance-store hvm

Once the CentOS 6.5 EMI has been listed, launch an instance from the EMI.  For example:

# euca-run-instances -k account2-user11 -t m1.medium emi-bdcec010 
RESERVATION r-a22f0201 325271821652 default
INSTANCE i-b9fccf9f emi-bdcec010 pending account2-user11 0 m1.medium 2014-12-03T22:52:41.522Z Honest monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-6ef9907f x86_64
# euca-describe-instances i-b9fccf9f
RESERVATION r-a22f0201 325271821652 default
INSTANCE i-b9fccf9f emi-bdcec010 euca-10-104-7-15.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-248-178.future.internal running account2-user11 0 m1.medium 2014-12-03T22:52:41.522Z Honest monitoring-disabled 10.104.7.15 172.17.248.178 instance-store hvm sg-6ef9907f x86_64

Once the instance is running, it’s ready to be customized.

Adding ZFS Support to the Instance

Now that the instance is running, SSH into the instance so the following ZFS repository can be added:

[root@odc-f-13 ~]# ssh -i account2-user11.priv root@euca-10-104-7-15.future.future.euca-hasp.cs.prc.eucalyptus-systems.com
[root@euca-172-17-248-178 ~]# yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@euca-172-17-248-178 ~]# yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
[root@euca-172-17-248-178 ~]# yum upgrade -y
[root@euca-172-17-248-178 ~]# yum install kernel-devel zfs -y

After all the packages have been installed, reboot the instance:

[root@euca-172-17-248-178 ~]# reboot

Preparing the Instance For EMI Creation

After rebooting, SSH back into the instance and prepare it for EMI creation.  First, load the zfs module:

[root@odc-f-13 ~]# ssh -i account2-user11.priv root@euca-10-104-7-15.future.future.euca-hasp.cs.prc.eucalyptus-systems.com
[root@euca-172-17-248-178 ~]# modprobe zfs
[root@euca-172-17-248-178 ~]# lsmod | grep zfs
zfs 1195522 0
zcommon 46278 1 zfs
znvpair 80974 2 zfs,zcommon
zavl 6925 1 zfs
zunicode 323159 1 zfs
spl 266655 5 zfs,zcommon,znvpair,zavl,zunicode

After confirming that the ZFS module is loaded, clear the network udev rules, and confirm PERSISTENT_DHCLIENT is set to “yes” in the /etc/sysconfig/network-scripts/ifcfg-eth0 file:

[root@euca-172-17-248-178 ~]# echo "" > /etc/udev/rules.d/70-persistent-net.rules
[root@euca-172-17-248-178 ~]# echo "" > /lib/udev/rules.d/75-persistent-net-generator.rules
[root@euca-172-17-248-178 ~]# echo "PERSISTENT_DHCLIENT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0

Confirm that the instance has been upgraded to CentOS 6.6, then exit the instance.

[root@euca-172-17-248-178 ~]# cat /etc/redhat-release
CentOS release 6.6 (Final)
[root@euca-172-17-248-178 ~]# exit

Create the CentOS 6.6 EMI with ZFS Support

The instance is now ready to be bundled.  Bundle the instance using the euca-bundle-instance command.  This command is normally used to bundle Windows instances; however, Eucalyptus extended it to work with Linux instances as well.  Use euca-describe-bundle-tasks to monitor the bundling status:

[root@odc-f-13 ~]# euca-bundle-instance --bucket centos6.6-zfs --prefix centos6.6-zfs i-b9fccf9f
BUNDLE bun-b9fccf9f i-b9fccf9f centos6.6-zfs centos6.6-zfs 2014-12-03T23:54:51.644Z 2014-12-03T23:54:51.644Z pending 0 centos6.6-zfs/centos6.6-zfs.manifest.xml
..
[root@odc-f-13 ~]# euca-describe-bundle-tasks
BUNDLE bun-b9fccf9f i-b9fccf9f centos6.6-zfs centos6.6-zfs 2014-12-03T23:54:51.644Z 2014-12-03T23:57:37.517Z complete 0 centos6.6-zfs/centos6.6-zfs.manifest.xml

Once the bundle task completes, register the instance store-backed HVM image using the euca-register command:

[root@odc-f-13 ~]# euca-register -a x86_64 -n centos6.6-zfs centos6.6-zfs/centos6.6-zfs.manifest.xml --virtualization-type hvm 
IMAGE emi-5e63f02c

The custom image has been registered. Now let’s test it out.

ZFS Test

To test the image out, we will do the following:

  • Launch an instance from the new EMI
  • Create 5 volumes and attach them to the instance
  • Create a ZFS storage pool and dataset

To launch the instance, use the euca-run-instances command.  To create the 5 EBS volumes, use the euca-create-volume command.  After the volumes are created, use euca-attach-volume to attach the volumes to the instance.  Once the volumes are attached, the output of euca-describe-instances should look similar to the following:

# euca-describe-instances i-0cd3b6b8
RESERVATION r-cf7c5c73 325271821652 default
INSTANCE i-0cd3b6b8 emi-5e63f02c euca-10-104-7-3.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-248-184.future.internal running account2-user11 0 m1.medium 2014-12-04T00:16:52.887Z Honest monitoring-disabled 10.104.7.3 172.17.248.184 instance-store hvm sg-6ef9907f x86_64
BLOCKDEVICE /dev/sdd vol-a23cfb1f 2014-12-04T01:45:59.730Z false
BLOCKDEVICE /dev/sdh vol-a27b75a5 2014-12-04T01:47:31.162Z false
BLOCKDEVICE /dev/sdf vol-2a971204 2014-12-04T01:46:54.575Z false
BLOCKDEVICE /dev/sdg vol-b33e9890 2014-12-04T01:47:13.346Z false
BLOCKDEVICE /dev/sde vol-dcc8b6ac 2014-12-04T01:46:15.011Z false
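
For reference, the volume creation and attachment steps can also be scripted with boto.  The sketch below reuses the zone, instance ID and device names from this example, while the compute endpoint is a placeholder and the credentials are assumed to come from the boto configuration:

import time

from boto.ec2.connection import EC2Connection

# Placeholder endpoint – substitute the Compute API DNS name for your cloud.
EC2Connection.DefaultRegionEndpoint = 'compute.example-cloud.com'
connection = EC2Connection(is_secure=False, port=8773)

# Create five 5 GB volumes in the 'Honest' availability zone.
volumes = [connection.create_volume(5, 'Honest') for _ in range(5)]
devices = ['/dev/sdd', '/dev/sde', '/dev/sdf', '/dev/sdg', '/dev/sdh']

# Wait for each volume to become 'available', then attach it to the instance.
for volume, device in zip(volumes, devices):
    while volume.update() != 'available':
        time.sleep(5)
    connection.attach_volume(volume.id, 'i-0cd3b6b8', device)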

SSH into the instance and check what block devices are associated with the EBS volumes using the lsblk command:

# ssh -i account2-user11.priv root@euca-10-104-7-3.future.future.euca-hasp.cs.prc.eucalyptus-systems.com
[root@euca-172-17-248-184 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 4.9G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 4.4G 0 part
 ├─VolGroup-lv_root (dm-0) 253:0 0 3.9G 0 lvm /
 └─VolGroup-lv_swap (dm-1) 253:1 0 500M 0 lvm [SWAP]
vdb 252:16 0 5.1G 0 disk
vdc 252:32 0 5G 0 disk
vdd 252:48 0 5G 0 disk
vde 252:64 0 5G 0 disk
vdf 252:80 0 5G 0 disk
vdg 252:96 0 5G 0 disk

The EBS volumes are /dev/vdc, /dev/vdd, /dev/vde, /dev/vdf, and /dev/vdg.  Use these devices to create the ZFS storage pool by using the zpool command:

[root@euca-172-17-248-184 ~]# zpool create -f app-pool vdc vdd vde vdf vdg
[root@euca-172-17-248-184 ~]# zpool status
 pool: app-pool
 state: ONLINE
 scan: none requested
config:
 NAME STATE READ WRITE CKSUM
 app-pool ONLINE 0 0 0
 vdc1 ONLINE 0 0 0
 vdd1 ONLINE 0 0 0
 vde1 ONLINE 0 0 0
 vdf1 ONLINE 0 0 0
 vdg1 ONLINE 0 0 0
errors: No known data errors

Next, we need to create a ZFS dataset.  For this example, this instance will end up being a MySQL server, so we will create a dataset for storing the MySQL data.

[root@euca-172-17-248-184 ~]# zfs create app-pool/mysql
[root@euca-172-17-248-184 ~]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
app-pool 152K 24.5G 30K /app-pool
app-pool/mysql 30K 24.5G 30K /app-pool/mysql

The mount point of the dataset can be adjusted by setting the mountpoint option:

[root@euca-172-17-248-184 ~]# zfs set mountpoint=/opt/mysql app-pool/mysql
[root@euca-172-17-248-184 ~]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
app-pool 162K 24.5G 31K /app-pool
app-pool/mysql 30K 24.5G 30K /opt/mysql

That's it!  Notice how this only required 2 commands to set up an LVM + RAID filesystem, compared to around 7 commands using mdadm, pvcreate, vgcreate, mkfs, mkdir and mount. The instance is now ready to utilize the ZFS filesystem for the MySQL server.

Online Backup Example to OSG Bucket using s3cmd

As mentioned earlier, a slick feature of using ZFS is being able to perform backups online.  This section will show the following:

  • Setup and configure s3cmd
  • Create a ZFS snapshot, and use ZFS send with s3cmd to place the snapshot on an OSG bucket

To get started, in the instance, install the following packages:

[root@euca-172-17-248-184 ~]# yum install -y git python-dateutil.noarch xz

Next, clone the s3tools/s3cmd repository from Github:

[root@euca-172-17-248-184 ~]# git clone https://github.com/s3tools/s3cmd.git

If the instance was launched with an instance profile that assumes a role with OSG (S3) API access, s3cmd will pick up the temporary credentials and token through the Eucalyptus instance metadata service, as if the instance was launched on AWS EC2.  This wasn’t the case here, so we need to provide the Access Key ID and Secret Key manually:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: AKIRAGCHAGFE6IIX9BYF
Secret Key: GMdrL97AqcybhfyyxOpNmVUnBtiMenag3ju82L7L

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
 Access Key: AKIRAGCHAGFE6IIX9BYF
 Secret Key: GMdrL97AqcybhfyyxOpNmVUnBtiMenag3ju82L7L
 Encryption password:
 Path to GPG program: /usr/bin/gpg
 Use HTTPS protocol: False
 HTTP Proxy server name:
 HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

Edit the .s3cfg file to make sure it points to the OSG on your Eucalyptus 4.0.2 cloud.  For example, change the following:

host_base = s3.amazonaws.com

to

host_base = objectstorage.future.euca-hasp.cs.prc.eucalyptus-systems.com:8773

and

host_bucket = %(bucket)s.s3.amazonaws.com

to

host_bucket = %(bucket)s.objectstorage.future.euca-hasp.cs.prc.eucalyptus-systems.com:8773

Confirm that s3cmd is configured correctly.  For example:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd ls
2014-11-05 21:45 s3://centos-images
2014-12-03 23:54 s3://centos6.6-zfs
2014-10-08 01:50 s3://instance-profile-testing
2014-12-01 22:27 s3://mongodb-snapshots
2014-10-10 20:01 s3://new-ubuntu-bundled-image
2014-09-17 18:31 s3://s3cmd-testing
2014-09-30 01:58 s3://ubuntu-bundled-vol
2014-10-22 14:47 s3://ubuntu-docker-template
2014-10-08 13:39 s3://ubuntu-images
2014-10-02 01:42 s3://ubuntu-trusty-imported-20141001
2014-10-30 18:25 s3://ubuntu-trusty-imported-20141030
2014-10-29 02:18 s3://ubuntu-trusty-server-10282014
2014-10-01 00:28 s3://wrong-s3-url-test

To take a ZFS snapshot of the app-pool/mysql dataset, run the following:

[root@euca-172-17-248-184 ~]# zfs snapshot app-pool/mysql@wednesday
[root@euca-172-17-248-184 ~]# zfs list -t snapshot
NAME                       USED  AVAIL  REFER  MOUNTPOINT
app-pool/mysql@wednesday      0      -    30K  -

After creating a bucket for the backup, send the ZFS snapshot to the bucket:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd mb s3://mysql-backups
[root@euca-172-17-248-184 ~]# zfs send app-pool/mysql@wednesday | xz | ./s3cmd/s3cmd put - s3://mysql-backups/mysql-backup-wednesday.img.xz
<stdin> -> s3://mysql-backups/mysql-backup-wednesday.img.xz [part 1, 1440B]
 1440 of 1440 100% in 2s 561.67 B/s done

To confirm that the snapshot landed in the bucket, use s3cmd:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd ls s3://mysql-backups
2014-12-04 02:22 1440 s3://mysql-backups/mysql-backup-wednesday.img.xz
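Restoring is not covered in this walkthrough, but a minimal sketch might look like the following; the target dataset name app-pool/mysql-restore and the temporary download path are assumptions:

./s3cmd/s3cmd get s3://mysql-backups/mysql-backup-wednesday.img.xz /tmp/mysql-backup-wednesday.img.xz
xz -dc /tmp/mysql-backup-wednesday.img.xz | zfs receive app-pool/mysql-restore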

That's all, folks!  We have successfully created a CentOS 6.6 EMI with ZFS support.  For more information regarding ZFS (and the inspiration for this blog), check out the following resources:

Cloud Image Management on Eucalyptus: Creating a CentOS 6.6 EMI With ZFS Support

Using Eucalyptus 4.0.1 CloudFormation to Deploy a CoreOS (Docker) Cluster

In a previous blog, I discussed how cloud-init can be used to customize a CoreOS image deployed as an instance on Eucalyptus – which happens to work in the same fashion on AWS.  This is a follow-up blog to demonstrate how to use Eucalyptus Cloudformation (which is in Tech Preview in Eucalyptus 4.0.0/4.0.1) to deploy a CoreOS cluster on Eucalyptus, customizing each instance using the cloud-config service.  This setup will allow cloud users to test out CoreOS clusters on Eucalyptus, just as CoreOS recommends on AWS EC2.

Prerequisites

Just as in the previous blog discussing the use of CoreOS, using Eucalyptus IAM is highly recommended.  In addition to the prerequisites mentioned in that blog, the following service API actions need to be allowed (at a minimum) in the IAM policy for the user(s) who want to utilize this blog:

In addition to having the correct IAM policy actions authorized, the cloud user needs to be using the latest version of euca2ools with Eucalyptus 4.0.1.  Once these prerequisites are met, the Eucalyptus cloud needs to be prepared with the correct EMI for the deployment.

Adding CoreOS Image To Eucalyptus

In order to deploy a CoreOS cluster on Eucalyptus, the CoreOS image needs to be bundled, uploaded and registered.  To obtain the CoreOS image, download it from the CoreOS Beta Release site.  For example:

# wget -q http://beta.release.core-os.net/amd64-usr/current/coreos_production_ami_image.bin.bz2
 # bunzip2 -d coreos_production_ami_image.bin.bz2
 # qemu-img info coreos_production_ami_image.bin
 image: coreos_production_ami_image.bin
 file format: raw
 virtual size: 4.4G (4699717632 bytes)
 disk size: 4.4G

Once the image has been downloaded and user credentials have been sourced, use euca-install-image to bundle, upload and register the image as an instance store-backed HVM image to be used with the Cloudformation template. In addition, note the EC2_USER_ID value present in the eucarc file as it will be used with the Cloudformation template as well.

# euca-install-image -b coreos-production-ami -i coreos_production_ami_image.bin --virtualization-type hvm -n coreos-hvm -r x86_64
 ....
 /var/tmp/bundle-WsLdGB/coreos_production_ami_image.bin.part.19 100% |=================================================================| 6.08 MB 12.66 MB/s Time: 0:00:00
 /var/tmp/bundle-WsLdGB/coreos_production_ami_image.bin.manifest.xml 100% |============================================================| 6.28 kB 2.66 kB/s Time: 0:00:02
 IMAGE emi-DAB316FD
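As noted above, the EC2_USER_ID value will be needed for the Cloudformation template.  A quick sketch of pulling it out, assuming the credentials file is named eucarc and sits in the working directory:

grep EC2_USER_ID eucarc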

CoreOS etcd Discovery Service Token

CoreOS uses a service called etcd on each machine to handle coordination of services in a cluster.  To make sure the machines know that they are part of the same cluster, a discovery token needs to be generated and shared with each instance using the cloud-config service.  To generate a custom token, open a browser and go to the following URL:

https://discovery.etcd.io/new

A URL similar to the example below should show up in the browser:

https://discovery.etcd.io/7b67f765e2f264cf65b850a849a7da7e

Take note of the URL because it will be needed later.
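If a browser is not handy, the same discovery service can be hit from the command line; a minimal sketch:

curl -s https://discovery.etcd.io/new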

Select VM Type and Availability Zone on Eucalyptus

Before deploying the CoreOS cluster on Eucalyptus, the user needs to choose an instance type and an availability zone (Eucalyptus cluster).  To do this, use euca-describe-instance-types to show the instance types, the availability zone(s), and the capacity available for each instance type in each zone.

# euca-describe-instance-types --show-capacity --by-zone
 AVAILABILITYZONE SirLuciousLeftFoot
 INSTANCETYPE Name CPUs Memory (MiB) Disk (GiB) Used / Total Used %
 INSTANCETYPE t1.micro 1 256 5 0 / 6 0%
 INSTANCETYPE m1.small 1 512 10 0 / 6 0%
 INSTANCETYPE m1.medium 1 1024 10 0 / 6 0%
 INSTANCETYPE c1.xlarge 2 2048 10 0 / 3 0%
 INSTANCETYPE m1.large 2 1024 15 0 / 3 0%
 INSTANCETYPE c1.medium 1 1024 20 0 / 6 0%
 INSTANCETYPE m1.xlarge 2 1024 30 0 / 3 0%
 INSTANCETYPE m2.2xlarge 2 4096 30 0 / 3 0%
 INSTANCETYPE m3.2xlarge 4 4096 30 0 / 1 0%
 INSTANCETYPE m2.xlarge 2 2048 40 0 / 3 0%
 INSTANCETYPE m3.xlarge 2 2048 50 0 / 3 0%
 INSTANCETYPE cc1.4xlarge 8 3072 60 0 / 0
 INSTANCETYPE m2.4xlarge 8 4096 60 0 / 0
 INSTANCETYPE hi1.4xlarge 8 6144 120 0 / 0
 INSTANCETYPE cc2.8xlarge 16 6144 120 0 / 0
 INSTANCETYPE cg1.4xlarge 16 12288 200 0 / 0
 INSTANCETYPE cr1.8xlarge 16 16384 240 0 / 0
 INSTANCETYPE hs1.8xlarge 48 119808 24000 0 / 0
AVAILABILITYZONE ViciousLiesAndDangerousRumors
 INSTANCETYPE Name CPUs Memory (MiB) Disk (GiB) Used / Total Used %
 INSTANCETYPE t1.micro 1 256 5 4 / 12 33%
 INSTANCETYPE m1.small 1 512 10 4 / 12 33%
 INSTANCETYPE m1.medium 1 1024 10 4 / 12 33%
 INSTANCETYPE c1.xlarge 2 2048 10 2 / 6 33%
 INSTANCETYPE m1.large 2 1024 15 2 / 6 33%
 INSTANCETYPE c1.medium 1 1024 20 4 / 12 33%
 INSTANCETYPE m1.xlarge 2 1024 30 2 / 6 33%
 INSTANCETYPE m2.2xlarge 2 4096 30 0 / 2 0%
 INSTANCETYPE m3.2xlarge 4 4096 30 0 / 2 0%
 INSTANCETYPE m2.xlarge 2 2048 40 2 / 6 33%
 INSTANCETYPE m3.xlarge 2 2048 50 2 / 6 33%
 INSTANCETYPE cc1.4xlarge 8 3072 60 0 / 0
 INSTANCETYPE m2.4xlarge 8 4096 60 0 / 0
 INSTANCETYPE hi1.4xlarge 8 6144 120 0 / 0
 INSTANCETYPE cc2.8xlarge 16 6144 120 0 / 0
 INSTANCETYPE cg1.4xlarge 16 12288 200 0 / 0
 INSTANCETYPE cr1.8xlarge 16 16384 240 0 / 0
 INSTANCETYPE hs1.8xlarge 48 119808 24000 0 / 0

For this blog, the availability zone ‘ViciousLiesAndDangerousRumors’ and the instance type ‘c1.medium’ will be used as parameters for the Cloudformation template.  Now, Eucalyptus Cloudformation is ready to be used.

Deploying the CoreOS Cluster

Final Preparations

Before using the Cloudformation template for the CoreOS cluster, a keypair needs to be created.  This keypair will also be used as a parameter for the Cloudformation template.
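For example, a quick sketch using the keypair name that appears later in this post (account1-user01):

euca-create-keypair account1-user01 > account1-user01.priv
chmod 600 account1-user01.priv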

To obtain the template, download it from the coreos-cloudformation-template bucket on AWS S3.
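A hedged sketch of the download follows; the object key cfn-coreos-as.json is taken from the euform-create-stack command later in this post, and the path-style URL is an assumption:

wget https://s3.amazonaws.com/coreos-cloudformation-template/cfn-coreos-as.json

Once the file has been downloaded, the following edits need to be made.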

The first edit is to define the ‘AvailabilityZones’ in the ‘Properties’ section of the ‘CoreOsGroup’ resource.  For example, ‘ViciousLiesAndDangerousRumors’ has been placed as the value for ‘AvailabilityZones’:

"CoreOsGroup" : {
 "Type" : "AWS::AutoScaling::AutoScalingGroup",
 "Properties" : {
 "AvailabilityZones" : [ "ViciousLiesAndDangerousRumors" ],
 "LaunchConfigurationName" : { "Ref" : "CoreOsLaunchConfig" },
 "MinSize" : { "Ref" : "ClusterSize" },
 "MaxSize" : { "Ref" : "ClusterSize" }
 }
 },

The second and final edit is to update the ‘UserData’ property to have the correct value for the discovery token that was generated earlier in this blog.  For example:

"UserData" : { "Fn::Base64" : { "Fn::Join" : ["",[
 "#cloud-config","\n",
 "coreos:","\n",
 " etcd:","\n",
 " discovery: https://discovery.etcd.io/7b67f765e2f264cf65b850a849a7da7e","\n",
 " addr: $private_ipv4:4001","\n",
 " peer-addr: $private_ipv4:7001","\n",
 " units:","\n",

Now that these values have been updated, the CoreOS cluster can be deployed.
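Before creating the stack, it may be worth a quick sanity check that the edited template is still valid JSON.  A minimal sketch using Python's json module (euform-validate-template can also be used, if your euca2ools build includes it):

python -m json.tool cfn-coreos-as.json > /dev/null && echo "cfn-coreos-as.json is valid JSON"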

Create the Stack

To deploy the cluster, use euform-create-stack with the parameter values filled in appropriately.  For example:

# euform-create-stack --template-file cfn-coreos-as.json --parameter "CoreOSImageId=emi-DAB316FD" --parameter "UserKeyPair=account1-user01" --parameter "AcctId=408396244283" --parameter "ClusterSize=3" --parameter "VmType=c1.medium" CoreOSClusterStack
 arn:aws:cloudformation:bigboi:408396244283:stack/CoreOSClusterStack/43d53adb-68f2-4317-bd2b-3da661977ebc

The ‘ClusterSize’ parameter depends entirely on how large a CoreOS cluster the user would like, given the instance types supported on the Eucalyptus cloud.  Please refer to the CoreOS documentation regarding optimal cluster sizes to see what best suits the use case of the cluster.

Check Out The Stack Resources

A few minutes after deploying the Cloudformation stack, use euform-describe-stacks to check its status.  The status of the stack should return CREATE_COMPLETE.

# euform-describe-stacks
 STACK CoreOSClusterStack CREATE_COMPLETE Complete! Deploy CoreOS Cluster 2014-08-28T22:31:02.669Z
 OUTPUT AutoScalingGroup CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG

To check out the resources associated with the Cloudformation stack, use euform-describe-stack-resources:

# euform-describe-stack-resources -n CoreOSClusterStack --region account1-user01@
 RESOURCE CoreOsSecurityGroupIngress2 CoreOsSecurityGroupIngress2 AWS::EC2::SecurityGroupIngress CREATE_COMPLETE
 RESOURCE CoreOsLaunchConfig CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB AWS::AutoScaling::LaunchConfiguration CREATE_COMPLETE
 RESOURCE CoreOsSecurityGroup CoreOSClusterStack-CoreOsSecurityGroup-D3WCUH0SKHYVC AWS::EC2::SecurityGroup CREATE_COMPLETE
 RESOURCE CoreOsSecurityGroupIngress1 CoreOsSecurityGroupIngress1 AWS::EC2::SecurityGroupIngress CREATE_COMPLETE
 RESOURCE CoreOsGroup CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG AWS::AutoScaling::AutoScalingGroup CREATE_COMPLETE

Check the status of the instances by using the value returned for ‘AutoScalingGroup’ from the euform-describe-stacks output:

# euscale-describe-auto-scaling-groups CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG --region account1-user01@
 AUTO-SCALING-GROUP CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB ViciousLiesAndDangerousRumors 3 33 Default
 INSTANCE i-E6FB62D0 ViciousLiesAndDangerousRumors InService Healthy CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB
 INSTANCE i-2AC4CC35 ViciousLiesAndDangerousRumors InService Healthy CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB
 INSTANCE i-442C4692 ViciousLiesAndDangerousRumors InService Healthy CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB

Check the Status of the CoreOS Cluster

In order to check the status of the CoreOS cluster, SSH into one of the instances (the SSH port was opened in the security group as part of the Cloudformation template), and use the fleetctl command:

# euca-describe-instances i-E6FB62D0 i-2AC4CC35 i-442C4692 --region account1-user01@
 RESERVATION r-AF98046C 408396244283 CoreOSClusterStack-CoreOsSecurityGroup-D3WCUH0SKHYVC
 INSTANCE i-2AC4CC35 emi-DAB316FD euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com euca-172-18-223-111.bigboi.internal running account1-user01 0 c1.medium 2014-08-28T22:15:48.043Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.233 172.18.223.111 instance-store hvm d88cac3d-ce92-4c3b-98ee-7e507afc26cb_ViciousLiesAndDangerousR_1 sg-31503C69 x86_64
 TAG instance i-2AC4CC35 aws:autoscaling:groupName CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG
 RESERVATION r-A24611A2 408396244283 CoreOSClusterStack-CoreOsSecurityGroup-D3WCUH0SKHYVC
 INSTANCE i-442C4692 emi-DAB316FD euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com euca-172-18-223-227.bigboi.internal running account1-user01 0 c1.medium 2014-08-28T22:15:48.056Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.235 172.18.223.227 instance-store hvm 1281a747-69a7-4f26-8fe2-2dea6b8b858d_ViciousLiesAndDangerousR_1 sg-31503C69 x86_64
 TAG instance i-442C4692 aws:autoscaling:groupName CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG
 RESERVATION r-089053BE 408396244283 CoreOSClusterStack-CoreOsSecurityGroup-D3WCUH0SKHYVC
 INSTANCE i-E6FB62D0 emi-DAB316FD euca-10-104-6-232.bigboi.acme.eucalyptus-systems.com euca-172-18-223-222.bigboi.internal running account1-user01 0 c1.medium 2014-08-28T22:15:38.146Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.232 172.18.223.222 instance-store hvm c0dc6cca-5fa3-4614-a4ec-8a902bf6ff66_ViciousLiesAndDangerousR_1 sg-31503C69 x86_64
 TAG instance i-E6FB62D0 aws:autoscaling:groupName CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG
# ssh -i account1-user01/account1-user01.priv core@euca-10-104-6-232.bigboi.acme.eucalyptus-systems.com
 Last login: Thu Aug 28 15:32:34 2014 from 10.104.10.55
 CoreOS (beta)
 core@euca-172-18-223-222 ~ $ fleetctl list-machines -full=true
 MACHINE IP METADATA
 6f4e3de463490a7644e3d7c80d826770 172.18.223.227 -
 929c1f121860c63b506c0b951c19de7b 172.18.223.222 -
 a08155346fb55f9b53b154d6447af0fa 172.18.223.211 -
 core@euca-172-18-223-222 ~ $

The cluster status can also be checked by going to the discovery token URL that was placed in the Cloudformation template.

CoreOS etcd discovery cluster listing
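The same listing can also be pulled with curl, which returns a JSON document describing the registered peers (using the token URL generated earlier):

curl -s https://discovery.etcd.io/7b67f765e2f264cf65b850a849a7da7e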

Conclusion

Just as on AWS, Cloudformation can be used to deploy a CoreOS cluster on Eucalyptus.  Users will be able to test out different use cases, such as Cluster-Level Container Development with fleet, or get more familiar with CoreOS by going through the CoreOS documentation.  As always, feel free to ask any questions.  Feedback is always welcome.

Enjoy!

Using Eucalyptus 4.0.1 CloudFormation to Deploy a CoreOS (Docker) Cluster

Setting Up 3-Factor Authentication (Keypair, Password, Google Authenticator) for Eucalyptus Cloud Instances

Recently, I was logging into my AWS account, where I have multi-factor authentication (MFA) enabled, using the Google Authenticator application on my smart phone.  This inspired me to research how to enable MFA for any Linux distribution.  I ran across the following blog entries:

From there, I figured I would try to create a Eucalyptus EMI that would support three-factor authentication on a Eucalyptus 4.0 cloud.  The trick here was to figure out how to display the Google Authenticator information so users could configure Google Authenticator.  The euca2ools command ‘euca-get-console-output‘ proved to be the perfect mechanism to provide this information to the cloud user.  This blog will show how to configure an Ubuntu Trusty (14.04) Cloud image to support three-factor authentication.

Prerequisites

In order to leverage the steps mentioned in this blog, the following is needed:

Now that the prerequisites have been covered, let's get started.

Updating the Ubuntu Image

Before we can update the Ubuntu image, let’s download the image:

[root@odc-f-13 ~]# wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

After the image has been downloaded successfully, the image needs to be converted to a raw format.  Use qemu-img for this conversion:

[root@odc-f-13 ~]# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw

After converting the image to a raw format, we need to mount it in order to update the image accordingly:

[root@odc-f-13 ~]# losetup /dev/loop0 trusty-server-cloudimg-amd64-disk1.raw
[root@odc-f-13 ~]# kpartx -av /dev/loop0
add map loop0p1 (253:2): 0 4192256 linear /dev/loop0 2048
[root@odc-f-13 ~]# mkdir /mnt/ubuntu
[root@odc-f-13 ~]# mount /dev/mapper/loop0p1 /mnt/ubuntu
[root@odc-f-13 ~]# chroot /mnt/ubuntu

The ‘chroot’ command above allows us to edit the image as if it were the currently running Linux operating system.  A couple of packages need to be installed in the image.  Before installing them, use the resolvconf command to create the necessary information in /etc/resolv.conf.

root@odc-f-13:/# resolvconf -I

Confirm the settings are correct by running ‘apt-get update’:

root@odc-f-13:/#  apt-get update

Once that command runs successfully, install the PAM module for Google Authenticator and the whois package:

root@odc-f-13:/# apt-get install libpam-google-authenticator whois

After these packages have been installed, run the ‘google-authenticator’ command to see all the available options:

root@odc-f-13:/# google-authenticator --help
google-authenticator [<options>]
 -h, --help Print this message
 -c, --counter-based Set up counter-based (HOTP) verification
 -t, --time-based Set up time-based (TOTP) verification
 -d, --disallow-reuse Disallow reuse of previously used TOTP tokens
 -D, --allow-reuse Allow reuse of previously used TOTP tokens
 -f, --force Write file without first confirming with user
 -l, --label=<label> Override the default label in "otpauth://" URL
 -q, --quiet Quiet mode
 -Q, --qr-mode={NONE,ANSI,UTF8}
 -r, --rate-limit=N Limit logins to N per every M seconds
 -R, --rate-time=M Limit logins to N per every M seconds
 -u, --no-rate-limit Disable rate-limiting
 -s, --secret=<file> Specify a non-standard file location
 -w, --window-size=W Set window of concurrently valid codes
 -W, --minimal-window Disable window of concurrently valid codes

Updating PAM configuration

Next the PAM configuration file /etc/pam.d/common-auth needs to be updated.  Find the following line in that file:

auth [success=1 default=ignore] pam_unix.so nullok_secure

Replace it with the following lines:

auth requisite pam_unix.so nullok_secure
auth requisite pam_google_authenticator.so
auth [success=1 default=ignore] pam_permit.so

Next, we need to update SSHD configuration.

Update SSHD configuration

We need to modify the /etc/ssh/sshd_config file to help make sure the Google Authenticator PAM module works successfully.  Modify/add the following lines to the /etc/ssh/sshd_config file:

ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
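A hedged sketch of scripting these edits from inside the chroot, assuming the stock Ubuntu sshd_config (which ships with ChallengeResponseAuthentication set to no):

sed -i 's/^ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
grep -q '^AuthenticationMethods' /etc/ssh/sshd_config || echo 'AuthenticationMethods publickey,keyboard-interactive' >> /etc/ssh/sshd_config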

Updating Cloud-Init Configuration

The next modification involves enabling the ‘ubuntu‘ user to have a password.  By default, the account is locked (i.e. doesn’t have a password assigned) in the cloud-init configuration file.  For this exercise, we will enable it, and assign a password.  Just like the old Ubuntu Cloud images, we will assign the ‘ubuntu‘ user the password ‘ubuntu‘.

Use ‘mkpasswd‘ as mentioned in the cloud-init documentation to create the password for the user:

root@odc-f-13:/# mkpasswd --method=SHA-512
Password:
$6$8/.y8gwYT$dVmtT7jXdBrz0w1ku5mh6HOC.vngjsXpehyeEicJT4kIyhvUMV3p9VGUIDC42Z1mjXdfAaQkINcCfcFe5jEKX/

In the file /etc/cloud/cloud.cfg, find the section ‘default_user‘.  Change the following line from:

lock_passwd: True

to

lock_passwd: False
passwd: $6$8/.y8gwYT$dVmtT7jXdBrz0w1ku5mh6HOC.vngjsXpehyeEicJT4kIyhvUMV3p9VGUIDC42Z1mjXdfAaQkINcCfcFe5jEKX/

The value for the ‘passwd‘ option is the output from the mkpasswd command executed earlier.

Updating /etc/rc.local

The final update to the image is to add some bash code to the /etc/rc.local file.  This update allows the information for configuring Google Authenticator with the instance to be presented to the user through the output of ‘euca-get-console-output‘.  Add the following code to the /etc/rc.local file above the ‘exit 0‘ line:

if [ ! -f /home/ubuntu/.google_authenticator ]; then
 /bin/su ubuntu -c "google-authenticator -t -d -f -r 3 -R 30 -w 4" > /root/google-auth.txt
 echo "############################################################"
 echo "Google Authenticator Information:"
 echo "############################################################"
 cat /root/google-auth.txt
 echo "############################################################"
fi

That's it!  Now we need to bundle, upload and register the image.

Bundle, Upload and Register the Image

Since we are using an HVM image, we don’t have to worry about the kernel and ramdisk.  We can just bundle, upload and register the image.  To do so, use the euca-install-image command.  Before we do that, we need to exit out of the chroot environment and unmount the image:

root@odc-f-13:/# exit
[root@odc-f-13 ~]# umount /mnt/ubuntu
[root@odc-f-13 ~]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 ~]# losetup -d /dev/loop0

After unmounting the image, bundle, upload and register the image with the euca-install-image command:

[root@odc-f-13 ~]# euca-install-image -b ubuntu-trusty-server-google-auth-x86_64-hvm -i trusty-server-cloudimg-amd64-disk1.raw --virtualization-type hvm -n trusty-server-google-auth -r x86_64
/var/tmp/bundle-Q8yit1/trusty-server-cloudimg-amd64-disk1.raw.manifest.xml 100% |===============| 7.38 kB 3.13 kB/s Time: 0:00:02
IMAGE emi-FF439CBA

After the image is registered, launch the instance with a keypair that has been created using the ‘euca-create-keypair‘ command:

[root@odc-f-13 ~]# euca-run-instances -k account1-user01 -t m1.medium emi-FF439CBA
RESERVATION r-B79E6A59 408396244283 default
INSTANCE i-48D98090 emi-FF439CBA pending account1-user01 0 m1.medium 2014-07-21T20:23:10.285Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59

Once the instance has reached the ‘running’ state, use ‘euca-get-console-output’ to grab the Google Authenticator information:

[root@odc-f-13 ~]# euca-describe-instances i-48D98090
RESERVATION r-B79E6A59 408396244283 default
INSTANCE i-48D98090 emi-FF439CBA euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com euca-172-18-238-157.bigboi.internal running account1-user01 0 m1.medium 2014-07-21T20:23:10.285Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.237 172.18.238.157 instance-store hvm sg-A5133B59
[root@odc-f-13 ~]# euca-get-console-output i-48D98090
.......
############################################################
Google Authenticator Information:
############################################################
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/ubuntu@euca-172-18-238-157%3Fsecret%3D2MGKGDZTFLVE5LCX
Your new secret key is: 2MGKGDZTFLVE5LCX
Your verification code is 275414
Your emergency scratch codes are:
 59078604
 17425999
 89676696
 65201554
 14740079
############################################################
.....

Now we are ready to test access to the instance.

Testing Access to the Instance

To test access to the instance, make sure the Google Authenticator application is installed on your smart phone/hand-held device.  Next, copy the URL seen in the output (e.g. https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/ubuntu@euca-172-18-238-157%3Fsecret%3D2MGKGDZTFLVE5LCX) from ‘euca-get-console-output’, and paste it into a browser:

OTPAUTH URL for Google Authenticator

Use the ‘Google Authenticator’ application on your smart phone/hand-held device, and scan the QR Code:

Google Authenticator Application

Google Authenticator Application – Set Up Account

After selecting the ‘Set up account‘ option, select ‘Scan a barcode‘, hold your smartphone/hand-held device to the screen where your browser is showing the QR code, and scan:

Google Authenticator Application – Scan Barcode

After scanning the QR code, you should see the account get added, and the verification codes begin to populate for the account:

Verification Code For Instance

Finally, SSH into the instance using the following:

  • the private key of the keypair used when launching the instance with euca-run-instances
  • the password ‘ubuntu‘
  • the verification code displayed in Google Authenticator for the new account added

With the information above, the SSH authentication should look similar to the following:

[root@odc-f-13 ~]# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com
The authenticity of host 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com (10.104.6.237)' can't be established.
RSA key fingerprint is c9:37:18:66:e3:ee:66:d2:8a:ac:a4:21:a6:84:92:08.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com,10.104.6.237' (RSA) to the list of known hosts.
Authenticated with partial success.
Password:
Verification code:
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-32-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Mon Jul 21 13:23:48 UTC 2014

System load: 0.0 Memory usage: 5% Processes: 68
 Usage of /: 56.1% of 1.32GB Swap usage: 0% Users logged in: 0

Graph this data and manage this system at:
 https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@euca-172-18-238-157:~$

Three-factor authentication has been successfully configured for the Ubuntu cloud image.  If cloud administrators would like to use different authentication for the instance user, I suggest investigating how to set up PAM LDAP authentication, where SSH public keys are stored in OpenLDAP.  In order to do this, the Ubuntu image would have to be updated accordingly.  I would check out the ‘sss_ssh_authorizedkeys‘ command and the pam-script module to potentially help get this working.

Enjoy!

Setting Up 3-Factor Authentication (Keypair, Password, Google Authenticator) for Eucalyptus Cloud Instances