Author Archives: chouse

Working with Google Cloud Managed Instance Groups

Google Cloud Managed Instance Groups (MIGs) are groups of identical virtual machine instances that serve the same purpose.

Instances are created based on an Instance Template which defines the configuration that all instances will use including image, instance size, network, etc.

MIGs that host services are fronted by a load balancer which distributes client requests across the instances in the group.

MIG instances can also run batch processing applications which do not serve client requests and do not require a load balancer.

MIGs can be configured for autoscaling to increase the number of VM instances in the group based on CPU load or demand.

They can also auto-heal by replacing failed instances. Health checks are used to make sure each instance is responding correctly.

MIGs should be Regional and use VM instances in at least two different zones of a region. Regional MIGs can have up to 2000 instances.


Two different modules authored by Google can be used to create an Instance Template and MIG:

  • Instance template: terraform-google-modules/vm/google//modules/instance_template
  • Multi-version MIG: terraform-google-modules/vm/google//modules/mig_with_percent

To optionally create an internal load balancer, use: GoogleCloudPlatform/lb-internal/google

The examples below create a service account, two instance templates, a MIG, and an internal load balancer.


  • A custom image should be created with nginx installed and running at boot.
  • A VPC with a proxy-only subnet is required.
  • The instance template requires a service account.
# Enable IAM API
resource "google_project_service" "project" {
  project            = "my-gcp-project-1234"
  service            = "iam.googleapis.com"
  disable_on_destroy = false
}

# Service Account required for the Instance Template module
resource "google_service_account" "sa" {
  project      = "my-gcp-project-1234"
  account_id   = "sa-mig-test"
  display_name = "Service Account MIG test"
  depends_on   = [google_project_service.project]
}

Update project, account_id, and display_name with appropriate values.

Instance Templates

The instance template defines the instance configuration. This includes which network to join, any labels to apply to the instance, the size of the instance, network tags, disks, custom image, etc.

The MIG deployment requires an instance template.

The instance template requires that a source image have already been created.

In this terraform code example, two instance templates are created:

  • “A” template – initial version to use in the MIG
  • “B” template – future upgrade version to use with an optional canary update method

During the initial deployment, each instance template can point to the same custom image for the source_image value. In the future, each instance template should point to a different custom image.

# Instance Template "A"
# Creates google_compute_instance_template
module "instance_template_A" {
  source     = "terraform-google-modules/vm/google//modules/instance_template"
  region     = "us-central1"
  project_id = "my-gcp-project-1234"
  subnetwork = "us-central-01"

  service_account = {
    email  = google_service_account.sa.email
    scopes = ["cloud-platform"]
  }

  name_prefix    = "nginx-a"
  tags           = ["nginx"]
  labels         = { mig = "nginx" }
  machine_type   = "f1-micro"
  startup_script = "sed -i 's/nginx/'$HOSTNAME'/g' /var/www/html/index.nginx-debian.html"

  source_image_project = "my-gcp-project-1234"
  source_image         = "image-nginx"
  disk_size_gb         = 10
  disk_type            = "pd-balanced"
  preemptible          = true
}

# Instance Template "B"
module "instance_template_B" {
  source     = "terraform-google-modules/vm/google//modules/instance_template"
  region     = "us-central1"
  project_id = "my-gcp-project-1234"
  subnetwork = "us-central-01"

  service_account = {
    email  = google_service_account.sa.email
    scopes = ["cloud-platform"]
  }

  name_prefix    = "nginx-b"
  tags           = ["nginx"]
  labels         = { mig = "nginx" }
  machine_type   = "f1-micro"
  startup_script = "sed -i 's/nginx/'$HOSTNAME'/g' /var/www/html/index.nginx-debian.html"

  source_image_project = "my-gcp-project-1234"
  source_image         = "image-nginx"
  disk_size_gb         = 10
  disk_type            = "pd-balanced"
  preemptible          = true
}

Update the following with appropriate values:

  • Module name
  • region
  • project_id
  • subnetwork – the VPC subnet to use for instances deployed via the template
  • name_prefix – prefix for the instance template name; a version suffix will be appended to the name.
    • Be sure to include any specific versioning to indicate what is in the custom image.
    • Lowercase only.
  • tags – any required tags
  • labels – labels to apply to instances deployed via the template
  • machine_type – machine size to use
  • startup_script – startup script to run on each boot (not just deployment)
  • source_image_project – project where the image resides
  • source_image – image name
  • disk_size_gb – size of the boot disk
  • disk_type – type of boot disk
  • preemptible – if set to true, instances can be preempted as needed by Google Cloud.

More instance template module options are available in the module's documentation on the Terraform Registry.

Changes to the instance template will result in a new version of the template. The MIG will be modified to use the new version. All MIG instances will be recreated. See the update_policy section of the MIG module definition (below) to control the update behavior.

Managed Instance Group

The MIG creates a set of identical instances from the instance template and its custom image. Instances are customized as usual during first boot.

A custom startup script can run every time the instance starts and configure the VM further. See the startup scripts overview in the Compute Engine documentation.

In this Regional MIG terraform example, the initial set of instances are deployed using the “A” template set as the instance_template_initial_version.

The same “A” template is also set for the instance_template_next_version with a value of 0 for the next_version_percent.

In a future canary update, set the instance_template_next_version to the “B” template with an appropriate value for next_version_percent.

# Regional Managed Instance Group with support for canary updates
# Creates google_compute_health_check.http (optional), google_compute_health_check.https (optional),
# google_compute_health_check.tcp (optional), google_compute_region_autoscaler.autoscaler (optional),
# google_compute_region_instance_group_manager.mig

module "mig_nginx" {
  source      = "terraform-google-modules/vm/google//modules/mig_with_percent"
  project_id  = "my-gcp-project-1234"
  hostname    = "mig-nginx"
  region      = "us-central1"
  target_size = 4

  instance_template_initial_version = module.instance_template_A.self_link
  instance_template_next_version    = module.instance_template_A.self_link
  next_version_percent              = 0

  //distribution_policy_zones = ["us-central1-a", "us-central1-f"]

  update_policy = [{
    type                         = "PROACTIVE"
    instance_redistribution_type = "PROACTIVE"
    minimal_action               = "REPLACE"
    max_surge_percent            = null
    max_unavailable_percent      = null
    max_surge_fixed              = 4
    max_unavailable_fixed        = null
    min_ready_sec                = 50
    replacement_method           = "SUBSTITUTE"
  }]

  named_ports = [{
    name = "web"
    port = 80
  }]

  health_check = {
    type                = "http"
    initial_delay_sec   = 30
    check_interval_sec  = 30
    healthy_threshold   = 1
    timeout_sec         = 10
    unhealthy_threshold = 5
    response            = ""
    proxy_header        = "NONE"
    port                = 80
    request             = ""
    request_path        = "/"
    host                = ""
  }

  autoscaling_enabled          = false
  max_replicas                 = var.max_replicas
  min_replicas                 = var.min_replicas
  cooldown_period              = var.cooldown_period
  autoscaling_cpu              = var.autoscaling_cpu
  autoscaling_metric           = var.autoscaling_metric
  autoscaling_lb               = var.autoscaling_lb
  autoscaling_scale_in_control = var.autoscaling_scale_in_control
}

Update the following with appropriate values:

  • Module name
  • project_id
  • hostname – the prefix for provisioned VM names/hostnames; a random four-character suffix will be appended.
  • region
  • target_size – number of instances to create in the MIG. Does not need to equal the number of zones in distribution_policy_zones.
  • instance_template_initial_version – template to use for initial deployment
  • instance_template_next_version – template to use for future canary update
  • next_version_percent – percentage of instances in the group (of target_size) that should use the canary update
  • distribution_policy_zones – zone names in the region where VMs should be provisioned.
    • Optional. If not specified, the Google-authored terraform module will automatically select each zone in the region.
      • Example: us-central1 region has 4 zones so each zone will be populated in this field. This directly impacts the update_policy and its max_surge_fixed value.
    • This value cannot be changed later. The module will ignore any changes.
      • The MIG will need to be destroyed and recreated to update the zones to use.
    • More than two zones can be specified.
    • The target_size does not need to match the number of zones specified.
    • See "About regional MIGs" in the Compute Engine documentation.
  • update_policy – specifies how instances should be recreated when a new version of the instance template is available.
    • type set to
      • PROACTIVE will update all instances in a rolling fashion.
        • Leave max_unavailable_fixed as null which results in a value of 0, meaning no live instances can be unavailable.
        • Recommended
      • OPPORTUNISTIC means “only when you manually initiate the update on selected instances or when new instances are created. New instances can be created when you or another service, such as an autoscaler, resizes the MIG. Compute Engine does not actively initiate requests to apply opportunistic updates on existing instances.”
        • Not recommended
    • max_surge_fixed indicates the number of additional instances that are temporarily added to the group during an update.
      • These new instances will use the updated template.
      • Should be greater than or equal to the number of zones in distribution_policy_zones. If there are no zones specified in distribution_policy_zones, as mentioned previously, the Google-authored MIG module will automatically select all the zones in the region.
    • replacement_method can be set to either of the following values:
      • RECREATE instance name is preserved by deleting the old instance and then creating a new one with the same name.
      • SUBSTITUTE will create new instances with new names.
        • Results in a faster upgrade of the MIG – instances are available sooner than using RECREATE.
        • Recommended.
    • See the module documentation on the Terraform Registry and "Automatically apply VM configuration updates in a MIG" in the Compute Engine documentation.
  • named_ports – set the port name and port number as appropriate
  • health_check – set the check type, port, and request_path as appropriate
  • Autoscaling can also be configured. See "Autoscaling groups of instances" in the Compute Engine documentation.

More MIG module options are available in the module's documentation on the Terraform Registry.

Changes to the MIG may result in VMs needing to be updated. See the update_policy section of the MIG module definition (above) to configure the behavior when updating the MIG members.

Load Balancer

An Internal Load Balancer can make a MIG highly available to internal clients.

module "ilb_nginx" {
  source  = "GoogleCloudPlatform/lb-internal/google"
  version = "~> 4.0"
  project = "my-gcp-project-1234"

  network    = module.vpc_central.network_name
  subnetwork = module.vpc_central.subnets["us-central1/central-01-subnet-ilb"].name
  region     = "us-central1"
  name       = "ilb-nginx"
  ports      = ["80"]

  source_tags = ["nginx"]
  target_tags = ["nginx"]

  backends = [{
    group       = module.mig_nginx.instance_group
    description = ""
    failover    = false
  }]

  health_check = {
    type                = "http"
    check_interval_sec  = 30
    healthy_threshold   = 1
    timeout_sec         = 10
    unhealthy_threshold = 5
    response            = ""
    proxy_header        = "NONE"
    port                = 80
    request             = ""
    request_path        = "/"
    host                = ""
    enable_log          = false
    port_name           = "web"
  }
}

Update the following with appropriate values:

  • Module name
  • project
  • network and subnetwork – the VPC and proxy-only subnet to use
  • region
  • name
  • ports – the port to listen on
  • source_tags and target_tags – network tags to use, should be present on the MIG members via the instance template.
  • backends – points to the MIG
  • health_check – should generally match the MIG healthcheck.

More options are available; see the module source and documentation.

Be sure to consider any necessary firewall rules, especially if using network tags.

The Google-authored MIG module has create_before_destroy set to true, so a new MIG can replace an existing one as a backend behind the load balancer with only a very brief outage (less than 10 seconds). The new MIG will be created and added as a backend, and then the old MIG will be destroyed.
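In plain Terraform, the pattern the module relies on looks like this (an illustrative sketch, not the module's actual source):

```hcl
resource "google_compute_region_instance_group_manager" "mig" {
  # ... MIG configuration elided ...

  lifecycle {
    # Build the replacement MIG first, then tear down the old one,
    # so the load balancer always has a backend available.
    create_before_destroy = true
  }
}
```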

Day 2 operations

Changing size of MIG

If needed, adjust the target_size value of the MIG module to increase or decrease the number of instances. Adjustments take place right away.

If increasing the number of instances and a new template is in place and the update_policy is OPPORTUNISTIC, the new instances will be deployed using the new template.

Changing the zones to use for a MIG

The zones cannot be changed after creation. The MIG must be destroyed and recreated.

Deleting MIG members

Deleting a MIG member automatically reduces the target number of instances for the MIG. The deleted member is not replaced.

Restarting MIG members

Do not manually restart a MIG instance from within the VM itself. This will cause its healthcheck to fail and the MIG will delete/recreate the VM using the template.

Use the RESTART/REPLACE button in the Cloud Console and choose the Restart option. This affects all instances in the group, but can be limited to only acting against a maximum number at a time (“Maximum unavailable instances”).

The “Replace” option within RESTART/REPLACE will delete and recreate instances using the current template.

Updating MIG instances to a new version

When a new version of the custom image is released, such as when it has been updated with new software, the MIG can be updated in a controlled fashion until all members are running the updated version, without any outage.

The MIG module update_policy setting is very important for this process to ensure there is no outage:

  • max_surge_fixed is the number of additional instances created in the MIG and verified healthy before the old ones are removed.
    • Should be set to greater than or equal to the number of zones in distribution_policy_zones
  • max_unavailable_fixed should be set to null which equals 0: no live instances will be unavailable during the update.

The MIG module has options for two different instance templates in order to support performing a canary update where only a percentage of instances are upgraded to the new version:

  • instance_template_initial_version – template to use for initial deployment
  • instance_template_next_version – template to use for future canary update
  • next_version_percent – percentage of instances in the group (of target_size) that should use the canary update

Initially, both options may point to the same template and 0% is allocated to the “next” version.

If a load balancer is used, newly created instances that are verified healthy will automatically be selected to respond to client requests.

Canary update

To move a percentage of instances to the “next” version via a “canary” update:

  1. Set the instance_template_next_version to point to an instance template which uses an updated custom image
  2. Set the next_version_percent to an appropriate percentage of instances in the group that should use the “next” template.
  3. Make sure update_policy has type set to PROACTIVE – this will cause the change to take effect right away.

When applied via terraform, all instances will be recreated (adhering to the update_policy) but a percentage of instances will be created using the “next” template.
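Concretely, the canary change is just two arguments on the MIG module (shown here with template "B" and an illustrative 25%):

```hcl
  instance_template_next_version = module.instance_template_B.self_link
  next_version_percent           = 25
```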

After the canary update has been validated and all instances should be upgraded, see the steps below for a Regular update.

Regular update

To update all instances at once (adhering to the update_policy):

  1. Set both the instance_template_initial_version and instance_template_next_version to point to an instance template which uses an updated custom image
  2. Set the next_version_percent to 0.
  3. Make sure update_policy has type set to PROACTIVE – this will cause the change to take effect right away.

When applied via terraform, all instances will be recreated (adhering to the update_policy).

Fool Minecraft on consoles into connecting to a remote private server

Minecraft on Xbox and other consoles supports connecting only to local LAN private servers or official servers over the Internet. If you want to connect Minecraft on a console to a remote private Minecraft server, you need a method to fool it into thinking the remote server is actually local.

If you host a Minecraft Bedrock server through a hosting service, no one with an Xbox or other console (including you) can join it from the console and there is no ability to type in its IP address and connect manually.

If you host your own Minecraft Bedrock server in your house and you have an Xbox or other console, it will show up for you in the Friends tab in Minecraft on the console, but friends using a console in their house will not be able to see it in their Friends tab, even if you are in the game and try to invite them.

If interested, to learn how to set up your own Bedrock server on Ubuntu linux, check out Minecraft Bedrock Edition – Ubuntu Dedicated Server Guide. If you run this on a server at your house, be sure to review the Port Forwarding section to port-forward UDP:19132 to your internal Minecraft server.

In order to get Minecraft on Xbox or other consoles on the local network to connect to a private Minecraft Bedrock server over the internet, we first need to understand how Minecraft discovers local servers. I have not done packet capture analysis to validate this theory, but it would appear that Minecraft scans the local subnet, making connection attempts against all local hosts on UDP 19132, looking for valid responses.

If we run a service on a host on the local network that listens on UDP 19132 and forwards any packets sent to it out over the internet to the real Minecraft Bedrock server, the console should believe it’s a local Minecraft server and list it in the Friends tab.
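The relay idea can be sketched in a few lines of Python. This is only an illustration of what such a UDP forwarder does; the actual setup below uses the sudppipe utility, not this script:

```python
import socket
import threading

def udp_forward(listen_port, remote_host, remote_port, stop):
    """Relay UDP datagrams between a local client (the console) and a
    remote server, so the remote server appears to be on the LAN.

    Illustrative stand-in for sudppipe; handles one client at a time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", listen_port))
    sock.settimeout(0.5)  # wake periodically to check the stop flag
    remote = (remote_host, remote_port)
    client = None  # most recent local client seen
    while not stop.is_set():
        try:
            data, addr = sock.recvfrom(2048)
        except socket.timeout:
            continue
        if addr == remote:
            if client is not None:
                sock.sendto(data, client)  # server reply -> local client
        else:
            client = addr
            sock.sendto(data, remote)      # local client -> remote server
    sock.close()

# Example (hypothetical hostname): relay local UDP 19132 to a remote server
# stop = threading.Event()
# threading.Thread(target=udp_forward,
#                  args=(19132, "mc.example.com", 19132, stop),
#                  daemon=True).start()
```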

The following method uses a Windows laptop connected to the same local network as a console running Minecraft, running a service which listens on UDP 19132 and forwards connection requests to a remote Minecraft Bedrock server over the internet.

To Minecraft on the console, it will show a LAN server in the Friends tab which is actually the laptop, forwarding to the remote Minecraft Bedrock server.

Follow these steps to get your console running Minecraft to connect to a remotely-hosted Bedrock server, whether it’s at a friend’s house, or a hosting service.

If you have the Bedrock server running locally, your remote friend will need to perform these steps, not you.

Download the sudppipe utility which will listen on UDP 19132 and forward requests to the Bedrock server. Many thanks to the developer Luigi Auriemma who has made this method possible.

  1. Open your Downloads folder
  2. Right-click the downloaded ZIP file and choose Extract All
  3. Change the path to be C:\sudppipe
  4. Open the Start menu and start typing “command” and then open the Command Prompt app when it shows up in the search
  5. Type the following commands in the Command Prompt
    1. cd \sudppipe
    2. sudppipe.exe <server-address> 19132 19132
      • Replace <server-address> with the DNS hostname or IP address of the Minecraft Bedrock server. Leave the default UDP port 19132.
    3. Windows Security Alert may pop up asking if it’s OK for this application to communicate on networks.
      1. Check the boxes for Domain, Private, and Public networks, then click Allow Access
        • This is safe to do because it only applies to your home network and only when the sudppipe program is running.
  6. Leave the command prompt window open with sudppipe running
  7. In Minecraft on the console, go to Play and then select the Friends tab
  8. If everything is working properly, a LAN game should now be listed – this is the Minecraft Bedrock server that sudppipe is providing access to.
  9. You should be able to connect and join the game.
  10. When done, on the laptop in the Command Prompt box, press CTRL+C to quit sudppipe. The console won’t be able to see or communicate with the Bedrock server anymore.
sudppipe.exe example

Next time, just open a Command Prompt again and follow the steps above starting at Step 4 and run the commands again. If savvy enough, you can make a batch file script which performs the steps for you, and then just run the batch file.

This was validated with a Windows 10 laptop and Xbox One console but has not been tested with Nintendo Switch or PlayStation 4.

Attempts were made to use a Chromebook and UDP forwarding apps for Chrome or the linux container but were not successful.

I originally posted this procedure to the MCPE subreddit about a year ago.

Site to Site VPN between Google Cloud and pfSense on VMware at home

I’ve always wanted to set up a Site to Site VPN between a cloud provider and my home network. What follows is a guide inspired by Configure Google Cloud HA VPN with BGP on pfSense but customized for a Google Wi-Fi home network and updated with some pfSense changes that I had to figure out.

Home Network

When we built the house in 2015, I set up a 3-pack of original Google Wi-Fi (not the “Nest” version) to use as my router and access points throughout the house. Google Wi-Fi is great – it’s very easy to get started. Once deployed, it can generally be thought of as “set it and forget it”. However, it doesn’t provide all the bells and whistles that some of the more advanced home routers offer, but this can be a blessing in disguise because there is less to fiddle with and potentially mess up. Most importantly, it delivers a reliable experience for the family.

My home lab is a simple Intel NUC with a dual-core Intel Core i3-6100U 2.3 GHz CPU and 32 GB RAM. It runs a standalone instance of VMware ESXi 7. I run a few VMs when I need to, but nothing “production”.

Site-to-Site VPN with Google Cloud

Since switching to a full-time focus on cloud engineering and architecture, one of the things I’ve always wanted to try is to set up a Site-to-Site IPsec VPN tunnel with BGP between my home and a virtual private cloud (VPC) network to better understand the customer experience for VPN configuration and network management.

As I mentioned earlier, Google Wi-Fi is rather basic and doesn’t offer any VPN capability, but it can do port forwarding, and when combined with a virtual appliance, that’s all we really need.

pfSense overview

Since Google Wi-Fi does not have any VPN capabilities, I intend to use a pfSense virtual appliance in ESXi to act as a router for virtual machine clients on an internal ESXi host-only network. The host-only network will have no physical uplinks so the only way out to the Internet or the private cloud network is through the pfSense router.

pfSense will provide DHCP, DNS, NAT, and routing/default gateway services only to the clients on the internal host-only network.

Because of the way Google Wi-Fi is designed, no other router can sit between it and the internet without some loss of functionality (including mesh networking), and we do not want to disturb the other users of the network (family), so we will create an isolated network with pfSense on the ESXi host.

pfSense VM will have two virtual NICs:

  • NIC1 is connected to the “VM Network” and has Internet access through the home network.
  • NIC2 is connected to the internal isolated “host-only” network which does not have any connectivity to the Internet.
Network diagram showing connectivity between the Internet and pfSense running as a virtual machine straddling two networks in the ESXi host on the Intel NUC.

pfSense installation

Netgate has a comprehensive guide on how to install the pfSense virtual appliance on VMware ESXi.

Following are some installation tips that I found to be helpful:

  • Upload the pfSense ISO to an ESXi datastore – don’t forget to unzip it first.
  • When creating a new VM for pfSense on ESXi 7, select Guest OS family “Other” and Guest OS version “FreeBSD Pre-11 versions (64-bit)”
  • VM Hardware:
    • Set CPU to 2
    • Set Memory to 1 GB
    • Set Hard Disk to 8 GB
      • Make sure the SCSI adapter is LSI Logic Parallel
    • Set Network Adapter 1 to the home/internet network, mark it as Connect
    • Add a second Network Adapter for the host-only network, leave it as E1000, mark it as Connect
    • CD/DVD Drive 1 set to Datastore ISO file and browse for the pfSense ISO, mark it as Connect

Boot the VM off the ISO, accept the defaults and let it reboot.

On first boot, the WAN interface will have a DHCP IP from the home network (assigned by Google Wi-Fi) and the internal-facing LAN interface will have a static IP of 192.168.1.1 (the pfSense default). If this is incorrect, use the "Assign interfaces" menu item in the console to set which NIC corresponds to WAN and LAN appropriately. Use the ESXi configuration page to find the MAC address of each NIC and which network it is connected to in order to configure them appropriately.

Port-forward IPSec ports to pfSense

After pfSense is installed, we need to port-forward the external Internet-facing IPSec ports on the Google Wi-Fi router to the pfSense VM.

Google has recently relocated management of Google Wi-Fi to the Google Home app. Look for the Wi-Fi area, click the “gear” icon in upper right, select “Advanced networking”, and then “Port management”.

Use the “+” button to add a new rule. Scroll through the IPv4 tab to find the new “pfSense” entry and select it. Verify the MAC address shown is the same as the pfSense VM’s WAN NIC connected to the home network (“VM Network”). Add an entry for UDP 500. Repeat for UDP 4500.

Note: It is not possible to configure port forwarding unless the internal target is online. The Google Home app will only show a list of active targets that are connected to the network. If the pfSense host is not present, verify the VM is powered on and connected to the home network.

Port forwarding rules for inbound UDP/500 and UDP/4500 forwarding to the pfSense NIC1 on the home network

By default, pfSense only allows management access through its LAN interface, so the next step is to deploy a Jump VM with a web browser on the host-only network. Use the VM console to access the Jump VM desktop and launch the browser, since the VM will not be reachable from the home network (in case you wanted to RDP). Verify it has an IP on the pfSense LAN network. It should also be able to reach the internet, but this is not required.

pfSense initial configuration

On the Jump VM, browse to https://192.168.1.1 (the pfSense LAN IP), accept the certificate warning, and log in as admin with password pfsense. Step through the wizard.

Some tips:

  • Set the Hostname and Domain to something different than the rest of the network.
  • Configure WAN interface: Uncheck “Block RFC1918 Private Networks”
  • Set a secure password for admin
  • Select Interfaces | WAN
    • Uncheck “Block bogon networks” if selected
    • Click Save and then Apply

Google Cloud VPN configuration

Use the Google Cloud Console for the following steps:

  • Networking | VPC Networks
    • Create a new VPC network or use an existing one. Should have Dynamic routing mode set to Global.
  • Networking | Hybrid Connectivity | VPN
    • Create a new VPN Connection
      • Classic VPN
      • Select VPC network created earlier
      • Create a new external IP address or use an available one
      • Tunnels – set Remote peer IP address to the home network's external IPv4 address (from home, use a "what is my IP" service and note the IPv4 address)
      • Generate and save the pre-shared key – it is needed for pfSense.
      • Select the Dynamic (BGP) routing option and create a new Cloud Router. Set Google ASN to 65000. Create a new BGP session and set the Peer ASN (pfSense) to 65001. Enter a Cloud Router BGP IP and a BGP peer IP (pfSense) from the link-local 169.254.0.0/16 range, and note both for the pfSense configuration.
      • Note the external public IP address of the Cloud VPN.

pfSense IPsec configuration

Use the Jump VM web browser for these steps in the pfSense web interface:

  • System | Advanced | Firewall & NAT tab: Allow APIPA traffic
  • VPN | IPsec, Add P1
    • Set Remote Gateway to the Google Cloud VPN external public IP recorded previously.
    • Set “My identifier” to be “IP address” and enter the external public IPv4 address of the home network recorded earlier.
    • Enter the Pre-Shared Key generated for the Google Cloud VPN tunnel
      • It may not be possible to paste the key into the VM console – use a secure paste service to create a "Burn after reading" paste containing the key, then open the paste from the Jump VM to retrieve it.
    • Set the Phase 1 Encryption Algorithm to AES256-GCM
    • Set Life Time to 36000
  • Save and apply changes
  • Show P2 entries, Add P2
    • Mode: Routed (VTI)
    • Local network: Address – enter the pfSense BGP peer IP
    • Remote network: Address – enter the Cloud Router BGP IP
    • Protocol: ESP
    • Encryption Algorithms
      • AES, 128 bits
      • AES128-GCM, 128 bits
      • AES192-GCM, Auto
      • AES256-GCM, Auto
    • Hash Algorithms: SHA256
    • PFS key group: 14 (2048 bit)
  • Save, Apply changes
  • Click on Firewall | Rules, select IPsec from along the top, Add a new rule
    • Set Protocol to Any
  • Save rule, Apply changes

pfSense BGP configuration

Go to System | Package Manager, click on Available Packages, search for “frr”. Install “frr”. This will connect out to the Internet to retrieve the packages. Wait for it to complete successfully.

Go to Services | FRR Global/Zebra

  • Global Settings
    • Enable FRR
    • Enter a master password.
    • Set Syslog Logging to enabled and set Package Logging Level to Extended
  • Click on Access Lists along the top
    • Add a new Access List
      • Name: GCP
      • Access List Entries: set Sequence to 0, set Action to Permit, check box for Source Any
      • Click Save
  • Click on Prefix Lists along the top
    • Add a new Prefix List
      • Name: IPv4-any
      • Prefix List Entries: set Sequence to 0, set Action to Permit, check box for Any
      • Click Save
  • Click on BGP along the top
    • Enable BGP Routing
    • Set Local AS to 65001 (GCP Cloud Router was set to 65000)
    • Set Router ID to the pfSense BGP peer IP (the GCP Cloud Router's ID is its BGP IP)
    • Set Hold Time to 30
    • At the bottom, set Networks to Distribute to the pfSense LAN subnet
    • Click Save
  • Click Neighbors along the top, add a new Neighbor
    • Name/Address: the Cloud Router BGP IP
    • Remote AS: 65000
    • Prefix List Filter: IPv4-any, for both Inbound & Outbound
    • Path Advertise: All Paths to Neighbor
    • Save

Checking status

In pfSense, click on Status | FRR

In the Zebra Routes area, you should see "B>*" entries for subnets in the GCP VPC, reachable "via" the BGP IP of the GCP Cloud Router.

In the BGP Routes area, you should see Networks listed for the GCP VPC subnets, with a Next Hop of the GCP Cloud Router's BGP IP and a Path of 65000 (the GCP Cloud Router ASN).

BGP Neighbors should list the Cloud Router's BGP IP as a neighbor with remote AS 65000, local AS 65001, and a number of "accepted prefixes", which are the VPC subnets.

Visit the Cloud VPN area in the Google Cloud Console; the VPN tunnel should show Established, and the BGP session should also show established.

Visit the VPC and click on its Routes. There should be one listed for the on-premises pfSense LAN network, with a next hop of the VPN tunnel.

Validating connectivity

At this point, VMs in GCP should be able to communicate with VMs in the on-premises pfSense LAN network.

Create a GCE instance with no public IP and attach it to the VPC subnet. Make sure the firewall rules that apply to the instance permit ingress traffic from the pfSense LAN network, including the appropriate ports and protocols:

  • icmp
  • TCP 22 for SSH
  • TCP 3389 for RDP

Wrapping up

If things are not connecting, double-check everything, but also be sure to check the logs in pfSense and in GCP Cloud Logging. The most frequent issue I encountered was a mismatch of proposals by not selecting the right ciphers for the tunnel, or not setting my identifier properly. Also consider how firewall rules will impact communication.

Finally, the settings outlined here are obviously not meant for production use. I don’t claim to understand BGP any more than what it took to get pfSense working with Cloud VPN, so some of the settings I recommend could be enhanced and tightened from a security perspective. As always, your mileage may vary.

Changing GCP machine and disk size

Changing a Google Cloud (GCP) Compute Engine (GCE) virtual machine size or disk size is a typical “Day 2” activity that an operations team may perform as the needs of the application running in the VM evolve past what was initially specified during deployment.

As a best practice, all infrastructure deployment and modifications should be performed via Infrastructure-as-Code (IaC) where resources are defined using a declarative language such as Terraform and then a deployment process runs to create or update the resource using cloud APIs.

Changing machine size

For a given GCP Terraform google_compute_instance, change the machine_type value to one which meets the cpu/memory requirement:

  • See API machine type names (third-party site)
  • See GCP Terraform provider documentation
    • In the google_compute_instance set allow_stopping_for_update = true to avoid having to manually stop the VM prior to making the update in Terraform. With this argument set, Terraform will stop the instance during terraform apply and then start the instance when complete.
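As a sketch (the resource name and values here are hypothetical), the change amounts to editing one attribute and enabling the stop-for-update behavior:

```hcl
# Hypothetical instance: bump machine_type from e2-micro to e2-small
resource "google_compute_instance" "app" {
 project      = "my-gcp-project-1234"
 name         = "app-vm"
 zone         = "us-central1-a"
 machine_type = "e2-small" # was "e2-micro"; pick a type that meets cpu/memory needs

 # Allow terraform to stop the VM, change the machine type, and start it again
 allow_stopping_for_update = true

 boot_disk {
  initialize_params {
   image = "debian-cloud/debian-11"
  }
 }

 network_interface {
  network = "default"
 }
}
```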

Increasing disk size

See Working with persistent disks | Compute Engine documentation

Disk sizes may only be increased, not decreased.

Google recommends taking a snapshot of a disk prior to increasing its size. The snapshot is for safekeeping in case there is an issue with the overall process so that the data is not lost.
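The snapshot itself can be captured in terraform with the provider’s google_compute_snapshot resource (a sketch; the snapshot name is hypothetical, and the source disk matches the disk examples later in this post), or imperatively with gcloud compute disks snapshot:

```hcl
# Hypothetical pre-resize snapshot, taken before changing the disk size
resource "google_compute_snapshot" "data1_pre_resize" {
 project     = <project_id>
 name        = "test-np5-data1-pre-resize"
 source_disk = google_compute_disk.test-np5-data1.id
 zone        = "us-central1-a"
}
```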

If a smaller size is set, terraform will plan to destroy the disk and create a new one.

  • This can be prevented by setting the lifecycle argument on the google_compute_disk resource, causing the plan to fail:
lifecycle {
 prevent_destroy = true
}

Increasing the size of a disk can be done via Google Cloud Console, gcloud command line, or API/terraform. For IaC purposes, only terraform should be used.

Increasing boot disk size

VMs using public images automatically resize the root partition and file system after you’ve resized the boot disk on the VM and restarted the VM. If you are using an image that does not support this functionality, you must manually resize the root partition and file system.

Working with persistent disks | Compute Engine documentation

If the VM was created in terraform and did not have a boot disk created separately with a specific size, setting a new boot disk size in the google_compute_instance resource will cause terraform to recreate the VM.

VMs should be created in terraform with separate, independently-created boot and data disk google_compute_disk resources in order to safely increase the size of the disks in the future. For example:


data "google_compute_image" "debian9" {
 project = "debian-cloud"
 name = "debian-9-stretch-v20211105"
}

resource "google_compute_disk" "test-np5-boot" {
 project = <project_id>
 name = "test-np5-boot"
 type = "pd-standard"
 zone = "us-central1-a"
 size = 30

 image = data.google_compute_image.debian9.self_link
}

resource "google_compute_disk" "test-np5-data1" {
 project = <project_id>
 name = "test-np5-data1"
 type = "pd-standard"
 zone = "us-central1-a"
 size = 10
}

resource "google_compute_instance" "test-np5" {
 name = "test-np5"
 machine_type = "e2-micro"
 zone = "us-central1-a"
 project = <project_id>

 allow_stopping_for_update = true

 boot_disk {
  source = google_compute_disk.test-np5-boot.id
 }

 attached_disk {
  source = google_compute_disk.test-np5-data1.id
 }

 network_interface {
  subnetwork = "uscentral1"
  subnetwork_project = var.shared_vpc_host_project
 }

 metadata = {
  serial-port-logging-enable = true
  serial-port-enable = true
 }
}

Be sure to specify a specific name for the google_compute_image (as shown) so that the boot disk is not flagged to be recreated when a new version is released.

By default, a boot disk created separately from the VM will still be deleted when the instance is deleted. Set auto_delete = false in the boot_disk section of the google_compute_instance to prevent this behavior.
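For instance, a boot_disk block referencing the separately-created disk from the example above would look like this sketch:

```hcl
boot_disk {
 auto_delete = false # keep the separately-managed disk if the VM is deleted
 source      = google_compute_disk.test-np5-boot.id
}
```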

To increase the size of the boot disk, change the size value of the google_compute_disk referenced by the google_compute_instance boot_disk argument:

resource "google_compute_disk" "test-np5-boot" {
 project = <project_id>
 name = "test-np5-boot"
 type = "pd-standard"
 zone = "us-central1-a"
 size = 40

 image = data.google_compute_image.debian9.self_link
}

Terraform will update the size of the boot disk. The VM will not be restarted automatically, even if google_compute_instance has allow_stopping_for_update set to true because the change is being made to the google_compute_disk resource, not the VM instance.

Manually restart the VM during a maintenance window. If using a public image, or an image customized from a public image, the OS boot disk and partition should be expanded automatically.

If not, see Resize the file system and partitions.
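On a Linux guest that did not auto-expand, the manual steps look roughly like the sketch below. The device and partition names are hypothetical and vary per VM, so verify them with lsblk first:

```shell
# Grow partition 1 to fill the newly-enlarged disk (device name is hypothetical)
sudo growpart /dev/sda 1

# Grow an ext4 filesystem to fill the partition
sudo resize2fs /dev/sda1

# For an xfs root filesystem, grow via the mount point instead:
# sudo xfs_growfs /
```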

Adding a new data disk

In terraform, create a new disk using the google_compute_disk resource. Example:

resource "google_compute_disk" "data1" {
 project = <project_id>
 name = "test-np4-data1"
 type = "pd-standard"
 zone = "us-central1-a"
 size = 10

 lifecycle {
  prevent_destroy = true
 }
}

Modify the terraform VM google_compute_instance resource to include the attached_disk argument which references the google_compute_disk resource.


attached_disk {
 source = google_compute_disk.data1.id
}

Increasing data disk size

Modify the terraform google_compute_disk data disk size argument:

resource "google_compute_disk" "test-np5-data1" {
 project = <project_id>
 name = "test-np5-data1"
 type = "pd-standard"
 zone = "us-central1-a"
 size = 20
}

Terraform will update the size of the data disk. The VM will not be restarted automatically, even if google_compute_instance has allow_stopping_for_update set to true because the change is being made to the google_compute_disk resource, not the VM instance.

Modern OSs should automatically detect the capacity change of the data disk. If not, perform a rescan using the method provided by the operating system.

For exact steps to increase the size of a filesystem after increasing the disk size, see Resize the file system and partitions (select “Linux instances” or “Windows instances”).

My avatar background

The avatar that I use online is one I’ve had since 2008, when it was created for me by a marketing firm as part of VMworld 2008 design and branding.

I think personal branding in the public sphere is important. So when given the opportunity, I always use this avatar (or via Gravatar) as my profile picture – except on more professional platforms like LinkedIn where I use a professional photo, or more personal platforms like Facebook.

I like the simplicity of it – the simple solid colors, basic expression. It’s instantly recognizable, and people who have seen it before associate it with me.

The avatar was made as a cartoon version of myself, captured from a video where I’m being interviewed about VMware VDM (which eventually became View, and then Horizon View). The interview segment starts with me as the cartoon and then it morphs into video footage of me speaking about VDI.

My coworker and I were invited to a private customer beta session at the VMware campus on Hillview Ave in Palo Alto in July 2008 to learn about, use, and give feedback on what would eventually become VMware View, and later, Horizon View. After the beta session, I and a few others participated in the video interview about how we were using VDI.

At the time, I worked for a healthcare organization that was really at the forefront of virtual desktops, having deployed them in 2007 as part of a move to a new hospital campus, including a datacenter relocation. Famously, no PCs were purchased for the majority of the building; rather, thin terminals were deployed and we went all-in on VDI on VMware.

2008 was already a busy year for press about the organization’s use of VDI – a VMware press release in February discussed the hospital move and drive towards VDI.

An interview and photo shoot featured us on the cover of Network World magazine’s May 2008 issue.

Finally, I presented details of our VDI deployment at a well-attended West Michigan VMware User Group held at the hospital in a large conference room in November 2008. At the time, much of the automation was developed in-house and worked quite well, so we were wary of switching to a commercial product.

The video interview at VMware ultimately aired as part of the VMworld 2008 conference kickoff at The Venetian Hotel in Las Vegas in August, 2008. VMware View was announced at the conference and released in December, 2008.

I could not attend in person, but a review of photos taken at the event shows the same style used throughout the conference, with many different faces rendered in this simple cartoon format.

The look was created by Emotive Brand which developed the strategy, messaging and design of the VMworld experience for several years including 2008.

Another version was also developed which featured a different shirt and color background.

I alternated between versions for a time before settling on the yellow-background version.

It’s quite convenient to have a go-to avatar for use when needed. I always enjoy finding new places to use it!