Check Point API: Merging Management Servers With R80.10

Published On: October 27th, 2017

In R80 and above, Check Point has introduced a management API that opens up a whole new realm of possibilities for managing policies. It has also rendered obsolete many of the traditional tools we might once have used for this kind of work (such as cp_merge). The availability of the API brings a number of options for using scripts or third-party tools to manage and migrate your firewall policy.

Scenario

Suppose that you have two Check Point management servers that you want to merge. One is running R80 and the other is running R77.30. You want to eventually end up with a single management server, but don’t want to have to manually copy everything over. What do you do?

That was the question I faced a short time ago. I recently needed to perform this type of migration, combining the policies of two management servers in order to move the management of our office firewall cluster from a legacy management server to one running R80 in AWS. This gave me a great opportunity to learn how to use the R80 API.

While researching the best way to accomplish the migration, I came across a tool released by the Check Point APIs Team called ExportImportPolicyPackage. This is a Python script that leverages the R80.10 API to create a tar.gz file that can be imported into another management server.

The first challenge

This tool requires R80.10 across the board. The source management server was running R77.30 and the destination one was running R80 in AWS. In order to use the R80.10-compatible tool, I needed to upgrade everything to R80.10 first. To avoid manipulating production systems, I did all this work in VMs in our lab.

The R77.30 management upgrade was relatively straightforward: use the R80.10 migration tools for Pre-R80 Gaia versions (Check_Point_R80.10_migration_tools_PreR80.Gaia.tgz) to run an upgrade_export of the policy on the R77.30 management server, build a VM with a clean install of R80.10, and import the migrated policy. Easy.
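For reference, the export/import pair looks roughly like this. The archive path is illustrative, and the migrate script is run from the directory where the migration tools were extracted:

    # On the R77.30 source server, using the R80.10 migration tools
    # for Pre-R80 Gaia versions:
    ./migrate export /var/log/r7730_export.tgz

    # On the clean R80.10 VM, import the result:
    ./migrate import /var/log/r7730_export.tgz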

The R80-to-R80.10 upgrade for the destination management server was a bit more interesting. I was originally thinking I could just do an in-place HFA upgrade to R80.10, but since this server was running in AWS, that method isn’t supported (sk118717). If you try to run the HFA upgrade anyway, the installer detects the AWS environment and refuses to proceed.


(Full disclosure – I really wanted to see what would happen if I deleted /etc/in-aws, but I didn’t want to end up with an unsupported configuration for our production environment).

To make this more interesting/annoying, there are currently no published tools for migrating a management server from R80 to R80.10. The R80.10 migration tools only work on R80.10 systems, and the Pre-R80 migration tools only work on R77.30 and older (not on R80 – I tried). I asked for advice on what to use, and the SecurePlatform/Linux package was suggested, but that only works on R77.30 and older as well.


Without a good in-place upgrade option, I built an R80 VM in the lab, imported the production policy, did an HFA upgrade to R80.10, and then used this system to perform the policy migration and produce a final export that I could load into a new R80.10 AWS management server.

The next challenge – getting the API working

New versions of Check Point actually ship with a version of Python, and judging by some of the flags in its help output, ExportImportPolicyPackage looks like it could run locally on a management server. However, the version of Python bundled with Gaia was missing some functionality used by the script, so that was not possible. Running the script remotely from my machine was the best option.

The catch is that the management API doesn’t allow remote access by default. To enable remote access, open SmartConsole and navigate to Manage & Settings -> Blades -> Management API -> Advanced Settings and change the accessibility setting. After changing it, you will need to restart the API by SSHing into the management server and running the api restart command.
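From an SSH session in expert mode on the management server:

    # Restart the API so the new accessibility setting takes effect.
    api restart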


To confirm that the API is usable and available remotely, run the api status command. If Accessibility shows “Require all granted”, any system can access the API (on R80 this will show “Allow all”).

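Abbreviated output looks something like this (the exact layout varies by version; the key line is Accessibility):

    api status
    ...
    Accessibility:    Require all granted
    ...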

At this point, we can test exporting the policy from an administrator workstation running Python.

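A hypothetical invocation is sketched below. The server address, credentials, and package name are placeholders, and the flag names are recalled from the tool’s help output, so verify them with --help before running:

    # Export a policy package from the source R80.10 management server.
    # Flags are illustrative; check "python import_export_package.py --help".
    python import_export_package.py -op export \
        -m 203.0.113.10 -u admin -p 'S3cret!' \
        -n Standard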

This exports the access control policy, which was what we were interested in duplicating for this exercise. As the export runs, the script prints the rules and objects it is processing; keep an eye on this output to catch any issues as they occur.


Once the export is complete, review the console output for any issues. In our case everything looked good, so I went ahead and attempted to import the package into the new management server.

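The import is the same script pointed at the destination server. Again, a hypothetical sketch: the archive name is whatever the export step produced, and the flags should be verified with --help:

    # Import the exported package into the new R80.10 management server.
    # Flags and file name are illustrative.
    python import_export_package.py -op import \
        -m 203.0.113.20 -u admin -p 'S3cret!' \
        -f exported_package.tar.gz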

Of course, this failed. Looking at the API logs, it appeared that the publish operation was failing. Since the import wasn’t working, I wanted to determine whether I could publish anything through the API at all, even from the CLI on the local management server. It turns out I couldn’t.

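A publish test along these lines reproduces the problem (the host object and session file names are illustrative; mgmt_cli ships with the management server):

    # Log in locally as root and save the session details to a file.
    mgmt_cli login -r true > id.txt

    # Create a throwaway object, then try to publish the session.
    mgmt_cli add host name "api-publish-test" ip-address "10.99.99.99" -s id.txt
    mgmt_cli publish -s id.txt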

Even this simple operation wasn’t successful, so something clearly wasn’t right. Digging through fwm.elg, I found a Java exception indicating that the publish was failing because of a setting on our management server requiring a name and description for each session.


Disabling the “All sessions must have a name and description” option allowed publishing through the API to succeed, which in turn let the import through the Python script complete.

Once the import completed, I noticed a number of caveats that needed manual adjustment. First, Check Point objects such as our cluster object don’t migrate over – a placeholder object is created instead. To fix this, I manually recreated the cluster object to match what was in the old policy. Most rules migrated over fine, although I did need to manually touch any rule that referenced an object whose IP or subnet overlapped with a differently-named object in the destination, since the tool does not create objects with overlapping networks or IP addresses. This wasn’t a major problem, as it let me identify the overlaps and substitute the appropriate group from the original policy.

When reviewing the completed policy, be sure to check the following:

  • Any rule containing an object named export_error or import_error
    Most of these were the result of objects that already existed in the destination policy (a quick way to find them is sketched after this list)
  • NAT configs
    A lot of these didn’t transfer consistently (likely due to object duplication) and needed to be manually confirmed
  • VPN tunnels
    We didn’t have many VPN tunnels, but they were all re-configured from scratch
  • Check Point-specific config
    Every setting for the new cluster object needed to be manually copied over and verified
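
One quick way to hunt for the leftover error objects is through the API itself. A sketch, reusing a session file created with mgmt_cli login as shown earlier (show objects and its filter parameter are part of the R80.x management API):

    # List objects whose names match the tool's error markers.
    mgmt_cli show objects filter "import_error" limit 500 -s id.txt
    mgmt_cli show objects filter "export_error" limit 500 -s id.txt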

Once this is complete, you should be able to cut over the management to the new combined management server.

Hopefully this helps you in the event you face a similar project in the future.
