vSEC For Public Cloud


This document assumes you know how to use public cloud services and are comfortable launching instances and connecting to them. This also assumes you are familiar with Check Point Software Technologies. The goal of this document is to bring both of them together for you in a live Proof of Concept Demonstration.


If you have never used public cloud services, or have not launched a vSEC instance in the public cloud before, please start here!


Table of Contents

Introduction

Deploying a Controller in AWS

CPdeploy Controller Startup

Deploying Management and Gateway

Preparing Management for The Cloud

Bringing Your Application Service Online

Deploy a Gateway in Azure

Make Some Noise

Shutting Down





Public cloud can be a complex conversation, full of new terms and changes to how we currently manage and deploy not just agile applications, but agile security. Working in a cloud environment requires a different approach to how we manage security. Building on a strong central management that scales with the cloud, and enforcing policy with the capabilities of R80, is key to completing the full picture for the customer. With your Azure and AWS accounts set up, using only a web browser and an SSH console session, you can demonstrate an automated web application in the cloud, and expand on this to build complex designs, all without any physical hardware. One of the core goals of this PoC is not only to improve your understanding of public cloud architecture, but to give you a self-sufficient platform for testing and demonstration that you can also share with others. Some elements of this process require you to validate or find information within the AWS web console, which should get you more comfortable using it to troubleshoot your cloud deployments.


Manual deployment is prone to errors, so this document uses a very simple tool that acts as a controller to deploy the entire PoC in AWS and Azure. Please treat this as a template only; once you have your gateways up and the sites working, feel free to change and alter it as you see fit. This is a playground, more than anything. Using the process defined here, you can always deploy a new data center; just remember to terminate the old one. The controller builds the infrastructure in pieces, so be sure to review the changes it makes in the cloud providers' web consoles and in the Management console to see the full scope of the deployment. Once you are comfortable, take this PoC to the next level: teach someone else and try something new.


Deploying a Controller in AWS:


Follow these steps to deploy CPdeploy as an app:

Launch a new instance, and choose Amazon Linux AMI.


When asked to select a size for your instance, choose the free tier level. The controller does not require a lot of horsepower.



Select Next to configure the instance details. The default network setting of your default VPC should be fine; just ensure that Auto-assign Public IP is enabled for your subnet.



When that is complete, make sure to scroll to the bottom of the page.


You should find a section called ‘Advanced Details’. Expand this section:





This will expose a text box that we will use to complete the configuration of this host so that it simply runs cpdeploy when we log in.

In the text box, paste the following lines exactly as they are (no leading or trailing spaces).




#!/bin/bash
hostname controller
yum update -y
yum install -y docker
service docker start
usermod -a -G docker ec2-user
curl -o /home/ec2-user/.bash_profile www.killhup.cx/bash_profile
curl -o /home/ec2-user/.bashrc www.killhup.cx/bashrc
curl -o /home/ec2-user/private.cpdeploy.tar.gz www.killhup.cx/private.cpdeploy.tar.gz
cd /home/ec2-user
tar xfz private.cpdeploy.tar.gz
rm -rf private.cpdeploy.tar.gz
cd /home/ec2-user/private.build
docker build -t controller .
chown -R ec2-user /home/ec2-user




You can access a plain text copy to paste at http://app.killhup.cx/cpdeploy


This is a key component of cloud architecture. Any commands pasted here are run as root on first boot, which allows us to autoconfigure servers so they simply deploy and start operating as expected. This is the process being applied to our controller.
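If you want to confirm what cloud-init actually received and ran, a couple of commands are available from inside the instance once it is up. These are standard EC2 and Amazon Linux locations, not part of cpdeploy:

```shell
# Show the user-data this instance booted with, via the EC2 metadata service
curl -s http://169.254.169.254/latest/user-data

# Review what cloud-init did with it on first boot
sudo tail -n 50 /var/log/cloud-init-output.log
```

This only works from inside a running EC2 instance, so save it for troubleshooting after the controller launches.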



Continue with Next to select storage options.

The default storage is all that is needed, click next again:



Click Next for the optional step of naming your controller. I have applied the simple but to-the-point name of 'controller' in the next example; feel free to get creative, or skip it. Either way, click Next.


There will be a default security group allowing SSH into the instance. Create your own, select an existing one, or even put in an any/any accept rule. As long as SSH inbound is allowed, go ahead and click Next, so we can get to the fun part.


Ok, enough clicking Next. Ignore any warnings, press the Launch button, and let's get this show on the road.


And then you see this. . .


Uh oh. Not quite there yet. Here come those pesky keys again. You have the choice to create one here, or use an existing one. If you have an existing one, you have likely already selected it, launched your instance, and are waiting for the next step.

If you do not have a key pair, don't know what this means, or have a hard time getting connected in the next part, please read this section before continuing for detailed steps to help you access the controller for the setup.
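One common stumbling block, regardless of this PoC: a freshly downloaded .pem key is usually too permissive for SSH to accept, and SSH will refuse it with an "unprotected private key" warning. The fix is one command (my-keypair.pem and its location are placeholders for your actual key file):

```shell
# SSH refuses private keys that other users can read; restrict the key first
chmod 400 ~/Downloads/my-keypair.pem
```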


Congratulations, you have a controller service coming online.



If you check back in your EC2 web console, you will see your controller being deployed and starting up. It doesn't take longer than 4 minutes to come online, so take a short 5 minute break.


. . . . 5 minutes later


The status should be 'running'. Select the controller instance in the web GUI, and find the IP or Public DNS name of your instance.


Make note of the IP for our console connection to the controller.


SSH into the IP of your new controller with the username ec2-user. The system will start up on login and initiate the configuration steps needed to start working.
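If you prefer the command line to the web console, the lookup and login can be sketched like this. This assumes a locally configured AWS CLI, the Name tag of 'controller' used above, and a placeholder key file name:

```shell
# Find the controller's public IP by its Name tag
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=controller" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PublicIpAddress' \
  --output text

# Then connect as ec2-user with the key pair chosen at launch
ssh -i ~/Downloads/my-keypair.pem ec2-user@<public IP from above>
```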






CPdeploy Controller Startup


On login, the controller will start up automatically; hit Enter to start interacting with the console app.




If no configuration or credentials are available, it will walk you through a first time setup.




Provide access keys from IAM to support your deployment, after which you will be prompted to log in to Azure (optional). If you do not have access keys, or want to provide keys for someone else to use an AWS account, please read this section before continuing on.



You will be presented with a token and a URL to authenticate with for the Azure login.



Once you have supplied the token to the website, you should receive a login command OK message, and be prompted for your first time config.




The first time configuration is the core of getting everything to work, and as such, the variables required are documented in a separate section for portability. For an explanation of how to complete the first time configuration, please go to the CPdeploy Configuration Walk through for help with this. After completing the first time configuration section, return to this section, and you will be ready to drop into a shell and run the commands as usual, or try out the new console interface.



(You can always get back to the configuration by running ‘cpconfig’ at any time.)




You can exit to shell access at any time, but most commands to start your deployment can be run from the console app's own command line; press Enter first to reach it.


Running a command will return the expected input in the console logs. You can use ‘showme’ or ‘helpme’ for additional tips, but most of what you need is in the console.



Now included is the command 'launchmg <name>', which deploys your R80 management station in AWS to support your PoC. Proceed to the next section for details on how to deploy a security manager for gateways in the cloud.


Deploying Management and Gateway


Before we do anything, a management station is needed to centrally control our cloud based gateways. These steps should be completed at least an hour before the PoC, as the management station takes time to configure and deploy. The good news is, you won't be doing anything except running a command.

Hit <enter> and at the command prompt run 'launchmg <name>' (the name can be whatever you want to call it).



You should see output similar to the following screen. If you have setup DNS, the management station will be added to the domain and reachable by name. The IP, hostname and admin/password will be provided in the console logs from the values you supplied in the first time config.

During the initialization and startup of an instance, it will be listed in the coming online field and is not available for production use. There is nothing to do when devices are in this state; they will configure themselves. Do not attempt to access or intervene until they are fully in the green deployed field. An R80 management station can take up to 30 minutes to deploy and configure itself. Please wait until the management is no longer in the coming online state before proceeding.
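There is no harm in watching from the AWS side as well. For instance, the AWS CLI can block until an instance passes its status checks (the instance ID below is a placeholder); just keep in mind that EC2 status checks pass well before the Check Point first-time configuration finishes, so this is a floor, not a finish line:

```shell
# Poll until the instance passes both EC2 status checks, then return
aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef0
```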


You can prepare a gateway ahead of time, so let’s get a VPC network ready to support our environment while you wait for the management station to come online.

We will create a VPC called prod1, in east (N. Virginia) using subnet identifier 1.


Let this process run until the name of the VPC appears in the Standby VPC list. It will dump output to the console log section during its run; this output is for information only. No action is required until the VPC is in Standby.


In a few minutes, you should see your VPC in the Standby VPCs list like below:




We can now deploy a gateway into the VPC with 'launchgw <name of VPC>'.




Wait for the output to complete and the gateway to enter the coming online state.



Like the management station, the gateway should be left alone until it reaches the deployed state.


Time for a break, ~20 minutes or so. You can escape and exit out of the controller; the deployment is underway and not dependent on the controller being up or running.



When we return to our console, we should find that both the management station and the gateway are deployed. This means they are in a ready state. Before we connect our first gateway, we have to prepare the management station for our automated gateways. There are three settings to complete. Connect your SmartDashboard to the management station for a one-time configuration next.



Preparing Management for The Cloud


Connect to the smartcenter we deployed with the credentials you defined.




When you log in, on the Gateways tab you should notice the SmartCenter object does not have its external IP, since everything is NATed in AWS. In order for logs to reach the SmartCenter, we need to set the primary IP of the management station to the external IP assigned by AWS.

Select the management object, and replace the primary IP with the public one.



Change it to (in this example) the public IP assigned during our controller deployment:




Once changed, you should select OK and immediately publish your changes.





The next change we need to make is optional, and purely to keep the logs clean. You will find a lot of out-of-state activity in the cloud SDN environment, and while you do want to drop it, you probably don't want to look at it. Your choice, but I find it distracting. If you want to turn off logging of out-of-state packets, under Manage & Settings -> Blades, choose General Settings.





and under Stateful Inspection, uncheck the option to log out-of-state packets:




And the last change, which is not optional, is to activate the API server for remote access, essentially creating a programmable interface much like the cloud deployment we have seen so far.


Under Manage & Settings -> Blades, select Advanced Settings for the Management API.


In the advanced settings, allow access to the API from anywhere and select OK.



This change requires a restart of the API server, usually done from the command line, but in this case we can use the scripts repository to handle it for us.






Then go to Gateways & Servers, and access the scripts repository:



From here we can create a simple scripted command to restart the API server.



Create a new script that runs the api restart command:
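The script body itself is minimal: on an R80 management station the management API is controlled with the 'api' command, so a one-liner is all the repository entry needs. A sketch of what the script runs:

```shell
# Restart the management API server (runs in expert mode on the management)
api restart

# Once it settles, confirm it is accepting requests again
api status
```

These commands only exist on a Check Point management station, so they are shown here for reference rather than to be run on the controller.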



Save it, and publish your changes. Then from the repository, run the api restart:








You will see a small notice in the corner. Give it a few minutes to restart gracefully.


Please note that this is not recommended in production, as it will produce an error:



The act of restarting the API server causes an error; in practice, however, the system works as expected. The error occurs because the return result does not get reported during an API restart. Give the system a few minutes to restart services, and when you see this error (or reconnect and see it), it is safe to hide the error and proceed to the next page.


At this point you can return to your controller. To verify the API is working, run 'first.config.app' to prepare the policy on the management station to support our gateways-in-the-sky demo. This script should return a lot of OK messages to the console logs. If you see excessive errors, check your SmartCenter audit log to verify whether the changes are in place. In some cases, you just need to give it some time to process.
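If you want an independent check that the API server is reachable before running the script, the management API exposes a login endpoint over HTTPS. A minimal sketch, where 203.0.113.30 stands in for your management station's public IP and the credentials are the ones from your first time config:

```shell
# Any valid JSON reply here (even an authentication error) proves the API
# server is up and reachable through the AWS security group
curl -k -s -X POST https://203.0.113.30/web_api/login \
  -H 'Content-Type: application/json' \
  -d '{"user":"admin","password":"<your password>"}'
```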




This is the end of our management preparation. Review the management station and see what changes have been applied to policy and objects by the 'first.config.app' command. What effect do these changes have on the policy?






Bringing Your Application Service Online


It is now time to bring our VPC to a productive state, along with a live application.

To bring a Standby VPC online, it must be connected to a deployed gateway. To do that, hit Enter to get a command line and run

'Connect <name of gateway deployed>'


You should see lots of OKs. Gateway attachment and policy install can take a few minutes. You can verify progress in SmartDashboard or from the console logs.


During this process, the VPC being connected to the security management station will appear as both Standby and Online. Let it continue to run; it is in a transitional state and will be done when it is listed only in the VPCs Online field.



You should notice a message about threat policy being installed, and the VPC will be listed as online and available for applications.



You can verify the status of the gateway in your SmartDashboard. Review the rules it created and see if you can locate the matched objects and subnets in your AWS web GUI.


To complete the VPC, we will bring an application server online behind the gateway to provide simple web services and validate that we are passing real traffic. Only deploy applications in VPCs that are online, or they will fail to initialize and will have to be terminated. To deploy an application, use 'launchapp <name of VPC>' from the console command line.



Like the management stations and the gateways, as the application server is deployed, it will be listed as an app coming online in yellow.



When it has completed its configuration and is ready for service, it will move to the deployed state.



If you are using DNS services within AWS, you should be able to connect to the name of the VPC and domain to access the website:


Or directly to the IP address. Hey, what's the IP, you ask?

You can ask the console for the IP of online VPCs with the command 'status <name of VPC>'



This will return the IP for you in the console logs:
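If you have copied the console output to a file, a hypothetical one-liner can fish the address out for you. The echo below just fabricates a sample line so the sketch is self-contained; in practice you would save the real console output instead:

```shell
# Fabricated sample of a 'status' line, written to a saved log file
echo "prod1 online at 54.210.12.34" > console.log

# Pull the first IPv4 address out of the saved log
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' console.log | head -n 1
```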



Verify your website is up and accessible, and check your logs to see what traffic is flowing. Note the number of drops; there are many systems out there looking for open services, and it shouldn't take long for them to find yours.
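A browser works fine, but from the controller shell you can also verify the site with curl (203.0.113.20 is a placeholder for your VPC's public IP or DNS name):

```shell
# Print only the HTTP status code; 200 means the app is serving pages
curl -s -o /dev/null -w '%{http_code}\n' http://203.0.113.20/
```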


Your gateway is online; can you locate the subnet it is protecting in your AWS web console? Bonus points: can you launch your own application into this environment using the AWS web console? Can you access it or its service? What did you change to gain access?


Deploy a Gateway in Azure


The process to deploy a gateway in Azure, while slightly different from the command line, will still auto-deploy a gateway and application, and it will create the same service as we deployed in AWS; it just requires a different command.


Since Azure works on tokens, make sure you are still logged in to Azure by running 'azure login' from the console or command line. Follow the process to provide the token to the website to validate your identity.



Once you have validated yourself with the token, and you get the OK response;



Run ‘azurelaunch <0-254> <name>’



When you run it, you will notice a Standby VPC appear, and shortly after, a gateway coming online in the same way as all the others. You should not see a significant difference.




Wait for the gateway to be listed as deployed before connecting it to the security manager. While you are waiting, log in to the Azure portal and verify the changes being made. How is it different from AWS? How is it the same? Check your management console as well.


When the gateway is transitioning to deployed and available, you will see it appear as both deployed and coming online:



Your gateway should be ready in another minute; wait until you see it as fully deployed:



When your gateway is deployed but the VPC is still in Standby, you are ready to bring it online using 'Connect <name of vpc>':



When the Azure gateway is ready, it will have its associated VPC name in the VPCs Online column.




Deploy an application to verify the configuration is working, but ensure you use the Azure commands for Azure-based deployments. To deploy an application in an Azure-protected VPC, use 'azapplaunch <azure based VPC>':




Let the application service come online, and verify it just like the last gateway. Review the logs and the SmartDashboard policy configuration. How is the policy different for the Azure gateway? How are the NAT rules handled for each site?





You can find the IP and name of your online sites using 'status <name of VPC online>'.

What do you see in the logs? Probably more drops than accepts. As long as the web sites are working, let's make this interesting. Take the two IPs of each VPC and keep them handy, and proceed to the next stage.





Make Some Noise


We can virtualize clients as well as servers in the cloud, which makes a great way to test and validate how the virtual data center stands up under load. What factors define its capabilities, relative to performance, for the live VPC you have running now? Don't wonder; time to bring on the test clients.


These clients just want a URL to grab, and depending on the extension, it can be anything from IoT-style small packets to big bandwidth-hogging apps. These clients are aggressive; do not leave them running too long, just long enough to see what they are doing to your logs and to the systems themselves. What effects do the various clients have?


Hit Enter in the console, and run 'testclient <name> <IP or hostname of a VPC> <east/west>'


When the test client is listed as deployed, check your logs.


Use ‘die <name of test client>’ to shut them down.


For a heavy bandwidth-use client, try 'testclient <name> <IP or hostname of a VPC>/10mb.html <east/west>'

(launch one or two for each VPC that is online)


What effect does this client have on the gateway? What level of inspection is being performed under this load?


Check your logs, and don’t forget to stop it with ‘die <name>’


When you are done testing, be sure to shut down your running instances.


Shutting Down


At this point you can take control from SmartDashboard. Create a VPN, experiment with NAT. Just be sure to shut down the instances when you are done. Take the time to complete the shutdowns, and verify in the web console that everything is gone. For Azure, you need to delete the entire resource group from the web GUI.


Take the applications offline with 'killapp <name>'; perform this for each app deployed.


Next, take the gateways offline using 'killgw <name>'



After it completes, log into Smartdashboard. What do you notice about the policy? How have the NAT rules changed?


Be sure and kill all gateways, apps and clients that are in the deployed state.



Log back into your Azure portal and delete any resource groups:

Select the resource group and delete it. Azure will want you to spell out the name of the resource group before you kill it; not sure why, I guess so we feel sorry for it?


Be sure to select delete.


You should have nothing running except your own controller. You will have to manually delete the VPCs created in AWS. Everything should be tagged and visible. Do not delete your default VPC, only the ones created through the cpdeploy scripts.


And don’t forget to terminate the management station last, with ‘killmg <name>’.