vSEC For Public Cloud Supplemental Guide
This document provides supporting information for the CPdeployDoc Help.
A few variables in a file will make or break your controller setup. Keep in mind, this tool does nothing more than collect just enough variables for the API to build what we need. It can get very configurable, but the key is knowing which pieces are important. When you run 'cpconfig' you will be presented with a plain text file that needs some information. It's really not as bad as it looks, and this section will show you where in your cloud service web GUI to find the information needed to get up and running, so this is as much a walkthrough of the web GUI as it is of the console tool. The best tools in the world will not help until you know where to go for the right information; this section focuses on showing you where to look, and it's up to you to configure and try it. If it doesn't work, check your values and try again.
The editor is called nano and should feel much like using a tool such as Notepad. You can read more about the editor at http://www.nano-editor.org/docs.php, but it is not required reading. Use your arrow keys to get around. (For those of you who prefer older-style Unix editing, 'viconfig' will open the same file in vim.)
In your AWS web console, many services are offered, but for our purposes here we only need to focus on three specific ones.
Since we will be bouncing between the consoles, let's take a moment to define each area. If you are already comfortable with AWS concepts and sections, you can skip ahead.
This is the section that controls all the networking, and it includes many components we will need to configure. Within the VPC are many subcomponents we need to be aware of: subnets, routing, and network interfaces for the gateway. They are all linked, and if created in a flow, the network can be dynamically created if we just tell the configuration where to start.
EC2 is where all the compute configuration lives: details on machine types, states, and so on. We manage instances from here and can verify the images we are running, as well as which VPC and subnet an instance is attached to.
On the surface this appears to be domain name services from AWS. It is technically optional, but highly recommended. Having a hosted name service in AWS allows the demo to use names on the fly as systems are created; rather than tracking down every IP that gets generated, the names of your VPC and devices can be updated in real time. But having a hosted zone in the Route 53 service does much more than just map names to IPs for you. This is an SDN environment, which means programmatic changes to traffic flows allow granular control over how you distribute and send traffic, even without an Elastic Load Balancer, but let's not get ahead of ourselves. Getting your own domain does have a charge and requires some time to set up (waiting for the domain to be approved and propagated), but it is well worth the experience you will have using the automation services configured here.
To make the configuration setup easier, you can edit the top bar and place these three components at the top so they are easily accessible. The rest of this walkthrough will refer to each web section as VPC, EC2, and Route53 respectively; ensure you know how to access these sections in your Amazon Web Services console:
Now from our console, setting up your account is just a matter of matching the right parts in the web console.
To keep things simple, but still demonstrate geo-redundancy, the script works on the concept of east and west. 'east' refers to the region us-east-1, the location "N. Virginia" in your web console; 'west' is us-west-1, "N. California" in your web console. When you get more comfortable using the controller, you can use this section to change availability zones in your deployments for even further redundancy. You are probably safe to leave this as is, or check online to ensure the specific availability zone is up and running.
From the AWS documentation:
To find your regions and Availability Zones using the Amazon EC2 console:
1 Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2 On the navigation bar, view the options in the region selector.
3 Your Availability Zones are listed on the dashboard under Service Health, under Availability Zone Status.
You can select any of these for egwaz, and repeat the process for the wgwaz variable.
From within the controller you can run the command 'checkaz <east/west>' for a list of availability zones, so once set up, you don't have to keep checking online. When you are comfortable with your availability zone setup, proceed to the next part, Image sizing.
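As a sketch, the east and west availability zone lines in the configuration file can end up looking like this. The variable names egwaz and wgwaz come from the text above; the zone values are only examples, so use zones that exist and are healthy in your own account:

```shell
# Example cpconfig entries for availability zones (values are illustrative):
egwaz=us-east-1a     # east gateway availability zone (N. Virginia)
wgwaz=us-west-1b     # west gateway availability zone (N. California)
```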
The defaults are the lowest-end supported security gateways. You can experiment with various sizes if you want, but stick to the c4 recommendations in the file, as the gateways are only supported on certain platforms. The default gateways, applications, and clients are all sized for the lowest cost within a supported design. Defaults are fine if this is your first go, but you may want to play with sizing to verify performance as you become more comfortable.
Test Client Subnets
This controller includes the ability to launch simulated clients that generate high rates of traffic. Since they are outside our VPC configuration, we need to place them in the default VPC that AWS provides. You will need to find a subnet ID for both the east and west regions to deploy geographically.
From your AWS web console, login to the VPC section and verify your location is in N. Virginia.
Select Subnets and look for an available subnet in your default VPC. Our controller does not touch the default VPC, so we are guaranteed to place it outside our controller configuration. If you have a lot of choices, don't get overwhelmed. There are only two things we care about: that it is using the default VPC, and that it assigns public IP addresses. Select a subnet using the tag default and verify it assigns public IP addresses (they all do by default, which is scary if you think about it).
We have a winner; copy the subnet ID to the eclientsubnet variable.
Now we repeat for the west by changing region in the upper right corner: right by your name, select the city and choose N. California.
You should still be in the same Subnets section, just in the west location. Look for a default VPC in the same fashion. Even if you don't find a specific tag naming the default VPC, the properties will always be the same, and they will be in the 172.31.0.0/16 network. Even if you have many, they likely all will work.
This section of the configuration file is required; the setup will not work unless you apply subnet IDs specific to your environment.
Once you have filled in both east and west subnet IDs in this section, the command 'testclient' becomes available to you.
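Sketching what this looks like in the file: eclientsubnet is the name used above, and I am assuming the matching west variable is called wclientsubnet. The subnet IDs are placeholders; substitute the ones you copied from your own default VPCs:

```shell
# Test client subnet entries (subnet IDs are placeholders):
eclientsubnet=subnet-0a1b2c3d    # a default-VPC subnet in us-east-1
wclientsubnet=subnet-4e5f6a7b    # a default-VPC subnet in us-west-1
```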
Security Management Setup
This demo is really about gateway and security automation; however, the ability to launch management in the cloud is just as real. If you already have an R80 management station available, you can utilize it; just do not run the first.config.app script if you don't want us applying policy. It is easier and more stable to deploy a management station that supports the demo. The next section covers how your management station is configured and deployed. If you are using the cloud version, please ensure you have the right settings, or remember them if this is new.
The username will be the SmartDashboard administrator name. Avoid admin, as it is reserved for the Gaia OS. You must also set the password here if you hope to log in. The host value can be either an IP or a name, if you know your DNS is working (or you are using Route53).
Your one-time password for the gateways is here as well, so both manager and gateway can auto-connect without administrator access.
This should be straightforward if you know how to login to your security manager.
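A hedged sketch of what this part of the file can look like. Only username and host are named in the text above; the password and one-time password variable names below are my assumptions, and every value is a placeholder:

```shell
# Security management credentials (all values are placeholders):
username=cpadmin              # SmartDashboard administrator -- avoid "admin"
password=Chang3Me!            # set your own; you need it to log in
host=manage.example.com       # IP also works if you are not using Route53
otp=Chang3Me2!                # one-time password shared with the gateways
```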
The next part of the management setup is the network, and is similar to the client setup from before.
For the workload we are able to push to the management station, the default is best. Setting high IOPS gets costly and is not part of the configuration. Contact me offline for details on how.
We are also going to specify the Availability Zone for the instances. You can run 'checkaz' from the command line, but by default the script is set up to place the management station in N. Virginia. It is separate, and is done this way for simplicity of scripting. Pick an availability zone in the east for yourself, or leave the default. You can identify the one you need using the process already defined in the Availability Zones section above, and specify it in the mgplacement variable.
For the security group (mgsecgroup), we need to ensure that the group provides the right access and, for simplicity's sake, is open. You can define a tighter one if you know what you're doing, but for the purposes of this test, choose an open security group in the default VPC.
Make sure that the inbound and outbound rules are open.
You can also create a new group if you don't have one; just set it to open. Once you have a security group defined that fits our needs, apply it to the configuration in our controller.
Last, but not least, is the subnet ID of the management station. You can technically just let the system pick, and that's fine for most instances, but we need to make sure the management station is alive and well, and it makes troubleshooting easier if we know exactly where it is.
The mgsubnetid value can be found in the subnet section, and must map to the availability zone you chose for mgplacement.
It's in the availability zone we chose, it assigns public IPs, and it's in the default subnet: we have a winner. Use the subnet ID value in your configuration.
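Pulling the management placement pieces together, the three variables named in this section can end up looking like the sketch below. The IDs are placeholders from my walkthrough; yours will differ:

```shell
# Management placement sketch (IDs are placeholders):
mgplacement=us-east-1a        # an east availability zone, per the default
mgsecgroup=sg-0123abcd        # an open security group in the default VPC
mgsubnetid=subnet-89ab01cd    # must map to the mgplacement availability zone
```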
Name Services with Route53 (optional)
There is a variable called use_dns that, if set to true, will integrate with the AWS naming service through a hosted zone ID. Do not set it if you do not have a hosted domain in AWS; it is not required, as there is a yearly cost to having a personal domain. However, I encourage you to take a look before you dismiss it.
The cost is low, and the capability and ease of use are powerful for this controller. It makes managing devices that change IP as often as you launch them much easier, and having a hosted name that is internet-resolvable opens the demo up to the world.
If you have a hosted zone and want to use DNS, go to the Route53 console and select your hosted zones:
Choose the hosted zone you wish to use and put your hostedid and domain name into the configuration.
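As a sketch, the DNS entries in the configuration can look like this. The hosted zone ID and domain below are placeholders (the ID follows the format the Route53 console shows); copy your real values from your hosted zone:

```shell
# Route53 integration sketch (values are placeholders):
use_dns=true
hostedid=Z1D633PJN98FT9       # the Hosted Zone ID shown in the Route53 console
domain=example.com            # your hosted domain name
```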
So what if you don't have a hosted domain, but want one? There is a Register Domain button on the Route53 page that will let you check your options.
For $11, consider it if you plan to do many demos.
Admin Access to VPC
The last section we need to worry about deals with our ability to remotely access the systems in the cloud. The first value is the SSH key that you will use to access the gateways and applications directly. You can find your keyname for each region by going to the EC2 Dashboard and selecting Key Pairs from the menu. The name for your key will be listed, choose one for each region by changing your location from N. Virginia to N. California to cover keys in other regions.
The admin_host and admin_ip values will automatically allow SSH access for that host in the policy. If you have a jumping-off point to access the gateways remotely, put its IP and give it a name here; otherwise you can leave it alone. The default will give my IP SSH access to your instances, which I won't know about, and I don't have your key, so there isn't anything I can do with that. It is safe to just ignore if you don't plan to log in to the VPC devices. Also note these values, and see how the management station changes on first config from the controller.
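A final sketch of this section. The admin_host and admin_ip names come from the text above; I am assuming the per-region key variables are called ekeyname and wkeyname, and all values are placeholders (the IP is from a documentation range):

```shell
# Admin access sketch (values are placeholders):
ekeyname=my-east-key          # EC2 > Key Pairs, with N. Virginia selected
wkeyname=my-west-key          # EC2 > Key Pairs, with N. California selected
admin_host=jumphost           # a name for your jumping-off point
admin_ip=203.0.113.10         # its public IP
```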
ThatÕs it, the rest you should leave as default.
This PoC assumes you have active Amazon Web Services (AWS) and Azure accounts. There is no better way to understand cloud security than trying it for yourself, so get signed up with both public cloud providers.
It also assumes that you are comfortable working in a shell console (i.e., logging into your Gaia console), and that you are able to use Secure Shell (SSH) with private keys for authentication. For example, to access your cloud systems from a Windows machine using PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/), you can review general instructions from AWS on how to connect here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html. This is not a requirement, as this document includes getting logged in as part of the setup. You will also discover, once you are up and running, that logging into a security gateway is more for show than for operational reasons.
This PoC is based on AWS, and extends into Azure, so at the very least, ensure you have an active and working AWS account. If this is the first time you have ever logged into your account, or you have never launched a Check Point image in AWS before, you will need to complete a one-time process to authorize your account to launch Check Point images in AWS.
From your AWS web console select EC2.
And in the next screen, select the Launch Instance button:
For our first launch, we need to search the marketplace for the Check Point AMIs (Amazon Machine Images) we will be using for our proof of concept.
Once in the marketplace, search for, and select the R77.30 BYOL version for launch.
You can skip through the usual machine setup, and go right to Review.
On the next screen, don't worry about any warnings or errors you might see; just go for the Launch button.
You will be presented with a pop-up asking you to select or create a key pair for access.
We are not going to worry about keys just yet, so go ahead and choose to proceed without a key pair, and check the box to acknowledge this.
Now press that blue Launch Instances button, and start your first AMI. You should end up at a screen similar to this:
To check on our just launched instance, select the box icon in the upper left corner:
And return to the EC2 console:
We can see a running instance now listed:
Go ahead and click the link, taking you to the EC2 status, which should be Initializing. Select the check box to the left of the instance details:
Using the drop-down menu from Actions, select Instance State and click on Terminate.
Confirm you wish to terminate this instance.
Your instance will go from shutting down to terminated.
That wasn't so bad, was it? You just launched and killed your first instance. Now, for practice and to make sure you are authorized for the management station, repeat this process, only this time search for and select R80 Management (BYOL); launch it, and then terminate it right after. If you are having problems with your configuration, it is sometimes easier to just terminate instances and start again, particularly if they are programmable templates versus manually set-up systems.
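If you prefer the command line, the same check-and-terminate flow can be sketched with the AWS CLI, assuming it is installed and configured with your credentials (the instance ID below is a placeholder; substitute your own):

```shell
# List instances and their states in N. Virginia:
aws ec2 describe-instances --region us-east-1 \
    --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output table

# Terminate the practice instance (use your real instance ID):
aws ec2 terminate-instances --region us-east-1 --instance-ids i-0123456789abcdef0
```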
THIS IS ONLY NEEDED ONCE FOR YOUR ACCOUNT. When you have completed this, verify you have the appropriate Check Point subscriptions here:
There is a similar process to enable your account for Azure. Log in to the Azure portal at https://portal.azure.com (or get signed up right now!).
From your main page, select the New link.
And search for 'Check Point' to locate the images in Azure. Be sure to select the BYOL version.
Choose the image from the menu list:
This will open a menu on the right-hand side. Take a look at the bottom of the page; the link we need is tucked away below. Click that link and review the new page that opens:
At the bottom, you need to enable a subscription to programmatically call images in the Azure cloud.
Save this, and you are done with all the initial setup. Once you have these accounts, there is no need to repeat this section on subsequent PoC deployments.
Access to virtual instances in the public cloud requires a few steps to set up. If you do not have a key that you can use to access your systems, then from the screen you are looking at now, go ahead and create a key pair, give it a name you will remember, and download the key pair. This is the only time you can download these keys. For privacy reasons, the keys are not stored in Amazon, meaning you have to download the key now to ever use it. Please make note of where you store this key and what it is called. We will need it in a subsequent step.
Once you create this key, as long as you save it, you can simply reuse it in your options instead of creating new keys every time. You can also create new keys if these are lost or forgotten. But before we continue, it is also important to understand how to use these keys to get around AWS, and in particular, to reach the controller interface to run the core components of the PoC. You should be able to launch your instance now.
Congratulations, you have a controller service coming online.
If you check back in your EC2 web console, you will see your controller being deployed and starting up. It doesn't take longer than 4 minutes to come online, so take a short 5-minute break.
. . . . 5 minutes later
The status should be Running; select the controller instance in the web GUI, and note the IP or Public DNS name of your instance so we know where to connect.
We will SSH into the IP of your new controller with the username ec2-user. The system will start up on login and initiate the configuration steps needed to start working.
This is where that key you downloaded comes in handy. Password logins are not allowed in AWS, and when our gateways come online, you will see why.
PuTTY is a common and free tool for ssh access for windows and can be downloaded here: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
However, before we connect, PuTTY does not natively support the private key format (.pem) generated by Amazon EC2. PuTTY has a tool named PuTTYgen, which can convert keys to the required PuTTY format (.ppk). You must convert your private key into this format (.ppk) before attempting to connect to your instance using PuTTY.
To convert your private key
1 Start PuTTYgen (for example, from the Start menu, click All Programs > PuTTY > PuTTYgen).
2 Under Type of key to generate, select SSH-2 RSA.
3 Click Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem file, select the option to display files of all types.
4 Select your .pem file for the key pair that you specified when you launched your instance, and then click Open. Click OK to dismiss the confirmation dialog box.
5 Click Save private key to save the key in the format that PuTTY can use. PuTTYgen displays a warning about saving the key without a passphrase; click Yes. Note: A passphrase on a private key is an extra layer of protection, so even if your private key is discovered, it can't be used without the passphrase. The downside to using a passphrase is that it makes automation harder, because human intervention is needed to log on to an instance or copy files to an instance.
6 Specify the same name for the key that you used for the key pair (for example, my-key-pair). PuTTY automatically adds the .ppk file extension.
Your private key is now in the correct format for use with PuTTY. You can now connect to your instance using PuTTY's SSH client.
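As an aside, if you connect from a Linux or macOS machine instead of Windows, no conversion is needed: the .pem file works directly with OpenSSH. A minimal sketch, with the key file name and host as placeholders:

```shell
# Tighten permissions first -- ssh refuses private keys readable by others:
chmod 400 my-key-pair.pem

# Connect as ec2-user, the same account described above:
ssh -i my-key-pair.pem ec2-user@public-dns-name
```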
From the AWS online help for starting a PuTTY session:
Use the following procedure to connect to your controller instance using PuTTY. You'll need the .ppk file that you just created for your private key from when you launched your instance.
Start PuTTY (from the Start menu, click All Programs > PuTTY > PuTTY). In the Category pane, select Session and complete the following fields:
In the Host Name box, enter ec2-user@public_dns_name or IP.
Under Connection type, select SSH.
Ensure that Port is 22.
In the Category pane, expand Connection, expand SSH, and then select Auth. Complete the following:
Select the .ppk file that you generated for your key pair, and then click Open.
(Optional) If you plan to start this session again later, you can save the session information for future use. Select Session in the Category tree, enter a name for the session in Saved Sessions, and then click Save.
Click Open to start the PuTTY session.
If this is the first time you have connected to this instance, PuTTY displays a security alert dialog box that asks whether you trust the host you are connecting to.
Click Yes. A window opens and you are connected to your instance. Note: If you specified a passphrase when you converted your private key to PuTTY's format, you must provide that passphrase when you log in to the instance.
If you receive an error while attempting to connect to your instance, see Troubleshooting Connecting to Your Instance.
If your screen displays the following, you are ready to move to the next step and configure your controller; please click to return and continue your setup.
In case you are wondering what your access key is, let me point you to the place where you can grab as many as you like. You can share temporary access to allow others to run instances and create VPCs.
From all the AWS services available, select IAM, for Identity and Access Management.
In the IAM console, select Groups, and let's do this right. The easy way is to just create a new administrator, but for now and future use, let's create a new group for cpdeploy.
Call it whatever you like; you can change it at any time. Select Next Step. For permissions, cpdeploy needs AmazonEC2FullAccess and (if using hosted DNS within AWS; we will get back to that later) AmazonRoute53FullAccess, to create name entries as well as remove them when they are taken offline. It's much nicer than tracking IPs that always change.
Go ahead and create the group after the final review;
Select the User console from the right and create a new user.
You can create many users; ensure the box is checked to generate a key, and press Create:
And look what we have here. Access keys.
I'm showing you real access keys because by the time you read this, they will be destroyed.
We aren't done quite yet; select Close at the bottom and go back to your user list, selecting your newly created user.
Choose the group you just created, and select Add to Groups. Once you have added the user, you are all set to use the access keys for the cpdeploy controller.
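The console steps above can also be sketched with the AWS CLI, assuming it is installed and configured with an administrator's credentials (the user name below is a placeholder; the group name and policies come from the steps above):

```shell
# Create the group and attach the two managed policies the controller needs:
aws iam create-group --group-name cpdeploy
aws iam attach-group-policy --group-name cpdeploy \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-group-policy --group-name cpdeploy \
    --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess

# Create a user, add it to the group, and generate its access keys:
aws iam create-user --user-name cpdeploy-user
aws iam add-user-to-group --group-name cpdeploy --user-name cpdeploy-user
aws iam create-access-key --user-name cpdeploy-user   # prints the key pair once
```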
Back to our controller. . . .
Use the access keys to authenticate the controller.
That wasn't so bad, was it? You can create many keys, as well as delete them when you delete the user. Once you have your access keys in hand, return to continue your controller setup.