GCP Core Services Hands-on Tutorial: Compute Engine, Cloud Run, GKE Complete Operations Guide


Want to run your programs on GCP but don't know which service to use?

Compute Engine, Cloud Run, GKE... they all sound similar—what's the difference?

This article will walk you through actually operating GCP's three major compute services. From creating your first VM, to deploying Serverless containers, to managing Kubernetes clusters—step-by-step to get you started.

Want to understand GCP's overall architecture first? Please refer to "GCP Complete Guide: From Beginner Concepts to Enterprise Practice."



GCP Compute Service Selection Guide

💡 Key Takeaway: Before getting hands-on, understand the differences between these three services.

VM vs Container vs Serverless Comparison

| Service | Type | What You Manage | Suitable Scenarios |
|---|---|---|---|
| Compute Engine | VM | OS, runtime, application | Need full control, special software requirements |
| GKE | Container orchestration | Containers, Pods, Deployments | Large-scale microservices, complex orchestration |
| Cloud Run | Serverless container | Container image | API services, web apps, quick deployment |

Simple Memory Aid:

Choosing Services Based on Workload

Choose Compute Engine when:

• You need full control over the OS and runtime
• You have special software or licensing requirements
• You need GPUs or other specific hardware

Choose Cloud Run when:

• You're running stateless API services or web apps
• Traffic is spiky or unpredictable (it can scale to zero)
• You want fast deployments without managing infrastructure

Choose GKE when:

• You operate large-scale microservices
• You need complex orchestration or fine-grained network control

Service Combinations and Hybrid Architecture

In practice, many projects mix these services:

Common Combo 1: Frontend/Backend Separation

Common Combo 2: Microservices Architecture

Common Combo 3: ML Workflow



Compute Engine (VM) Hands-on Tutorial

Compute Engine is GCP's most basic compute service. Like renting a computer in the cloud.

Creating Your First VM Instance

Method 1: Using Cloud Console (Web Interface)

  1. Go to Cloud Console → Compute Engine → VM instances

  2. Click "Create Instance"

  3. Set basic info:

    • Name: my-first-vm
    • Region: asia-east1 (Taiwan)
    • Zone: asia-east1-b
  4. Select machine type (detailed next section)

  5. Select boot disk (detailed next section)

  6. Set firewall:

    • Check "Allow HTTP traffic" (if running web)
    • Check "Allow HTTPS traffic"
  7. Click "Create"

Method 2: Using gcloud CLI

gcloud compute instances create my-first-vm \
  --zone=asia-east1-b \
  --machine-type=e2-medium \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --boot-disk-size=20GB \
  --tags=http-server,https-server

CLI benefits: can be scripted for easy repetition and version control.
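A VM is more useful if it configures itself on first boot. A common pattern is to pass a startup script at creation time; a sketch (the local file name startup.sh is my own choice, not from the docs):

```shell
# Write a startup script that installs nginx on first boot
cat > startup.sh <<'EOF'
#!/bin/bash
apt-get update
apt-get install -y nginx
EOF

# Attach it at creation time (requires an authenticated gcloud session,
# so the command is shown here for reference only):
# gcloud compute instances create web-vm \
#   --zone=asia-east1-b \
#   --machine-type=e2-medium \
#   --image-family=debian-11 \
#   --image-project=debian-cloud \
#   --tags=http-server \
#   --metadata-from-file=startup-script=startup.sh
```

The script runs as root on every boot, so it should be idempotent; you can inspect its output in the VM's serial console logs.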

Machine Types and Spec Selection

GCP offers many machine series, and choosing the wrong one wastes money.

Machine Series Comparison:

| Series | Features | Use Cases | Price |
|---|---|---|---|
| E2 | Cheapest, shared CPU | Dev/test, small services | 💰 |
| N2 | Balanced, dedicated CPU | General production | 💰💰 |
| N2D | AMD processors | High cost-performance needs | 💰💰 |
| C2 | Compute optimized | CPU-intensive work | 💰💰💰 |
| M2 | Memory optimized | Large databases, SAP | 💰💰💰💰 |
| A2 | GPU optimized | ML training, rendering | 💰💰💰💰💰 |

How to Choose?

Custom Machine Type:

If standard specs don't fit your needs, customize vCPU and memory:

gcloud compute instances create custom-vm \
  --zone=asia-east1-b \
  --custom-cpu=6 \
  --custom-memory=12GB

Boot Disk and Image Settings

Image Selection:

| Type | Options | Cost |
|---|---|---|
| Public images | Debian, Ubuntu, CentOS | Free |
| Premium images | Windows, RHEL, SUSE | Extra charge |
| Custom images | Your own | Storage cost |

Disk Types:

| Type | IOPS | Use Cases | Price |
|---|---|---|---|
| pd-standard (HDD) | Low | Backup, cold data | $0.04/GB |
| pd-balanced (SSD) | Medium | General purpose | $0.10/GB |
| pd-ssd (SSD) | High | Databases, high I/O | $0.17/GB |
| pd-extreme (SSD) | Very high | High-performance databases | $0.125/GB plus a per-provisioned-IOPS charge |
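To make the per-GB prices concrete, here's a back-of-envelope calculation for a 100 GB disk using the prices listed above (illustrative; check the current pricing page):

```shell
# Monthly cost of a 100 GB disk at each per-GB price from the table above
size_gb=100
for entry in pd-standard:0.04 pd-balanced:0.10 pd-ssd:0.17; do
  disk_type=${entry%%:*}
  price=${entry##*:}
  awk -v t="$disk_type" -v s="$size_gb" -v p="$price" \
    'BEGIN { printf "%s: $%.2f/month\n", t, s * p }'
done
```

The jump from pd-balanced to pd-ssd at 100 GB is about $7/month; negligible per VM, but it compounds across a fleet.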

Recommendations:

Network and Firewall Configuration

Default Network Settings:

Each VM gets by default:

• An internal IP on the default VPC network
• An ephemeral external IP (unless you disable it)

Firewall Rule Setup:

# Allow HTTP
gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 \
  --target-tags=http-server

# Allow HTTPS
gcloud compute firewall-rules create allow-https \
  --allow=tcp:443 \
  --target-tags=https-server

# Allow SSH from specific IP
gcloud compute firewall-rules create allow-ssh-from-office \
  --allow=tcp:22 \
  --source-ranges=203.0.113.0/24

Security Recommendations:

• Never open SSH (port 22) to 0.0.0.0/0; restrict it to known IP ranges or use IAP
• Apply the http-server/https-server tags only to VMs that actually serve web traffic
• Review firewall rules periodically and remove unused ones

SSH Connection and Basic Management

Connection Methods:

1. Cloud Console Built-in SSH

Simplest way—click and connect.

2. gcloud CLI

gcloud compute ssh my-first-vm --zone=asia-east1-b

3. Standard SSH Client

# First set up SSH Key
gcloud compute config-ssh

# Then use regular SSH
ssh my-first-vm.asia-east1-b.your-project

Common Management Commands:

# List all VMs
gcloud compute instances list

# Stop VM (stops vCPU billing, but disk still charges)
gcloud compute instances stop my-first-vm --zone=asia-east1-b

# Start VM
gcloud compute instances start my-first-vm --zone=asia-east1-b

# Delete VM
gcloud compute instances delete my-first-vm --zone=asia-east1-b

For cost details, see "GCP Pricing and Cost Calculation Complete Guide."



Cloud Run Container Deployment Tutorial

Cloud Run is GCP's Serverless container service. Just give it a container—everything else is handled.

How Cloud Run Works

Core Concepts:

  1. You package a container image
  2. Deploy to Cloud Run
  3. Cloud Run automatically handles:
    • Starting containers
    • Load balancing
    • Auto-scaling (0 to N instances)
    • HTTPS certificates
    • Custom domains

Billing Method:

• You pay for CPU and memory only while requests are being handled
• A small per-request fee applies beyond the free tier
• A service scaled to 0 costs nothing while idle

Limitations:

• Workloads must be stateless (the local filesystem is in-memory and not persistent)
• The container must listen on the port given by the PORT environment variable
• Requests are subject to a timeout (configurable, up to 60 minutes)

Deploying Services from Container Registry

Step 1: Prepare Your Application

Using Node.js as example, create index.js:

const express = require('express');
const app = express();
const port = process.env.PORT || 8080;

app.get('/', (req, res) => {
  res.send('Hello from Cloud Run!');
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
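The Dockerfile in the next step copies package*.json, so the project also needs one. A minimal example (the express version shown is illustrative):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "express": "^4.18.2"
  }
}
```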

Step 2: Create Dockerfile

FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "index.js"]

Step 3: Build and Push to Artifact Registry

# Configure Docker authentication for Artifact Registry
gcloud auth configure-docker asia-east1-docker.pkg.dev

# Build image (REPO_NAME must be an existing Artifact Registry repository;
# create one with:
#   gcloud artifacts repositories create REPO_NAME \
#     --repository-format=docker --location=asia-east1)
docker build -t asia-east1-docker.pkg.dev/PROJECT_ID/REPO_NAME/my-app:v1 .

# Push
docker push asia-east1-docker.pkg.dev/PROJECT_ID/REPO_NAME/my-app:v1

Step 4: Deploy to Cloud Run

gcloud run deploy my-service \
  --image=asia-east1-docker.pkg.dev/PROJECT_ID/REPO_NAME/my-app:v1 \
  --region=asia-east1 \
  --platform=managed \
  --allow-unauthenticated

After deployment, you'll get an HTTPS URL.
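Before wiring that URL into anything, it's worth a quick smoke test. A small retry helper (my own sketch, not part of gcloud; new revisions can take a few seconds to serve their first request, hence the retries):

```shell
# Retry an HTTP endpoint a few times before declaring it dead.
# Fetch the deployed URL first (requires an authenticated gcloud session):
#   URL=$(gcloud run services describe my-service \
#     --region=asia-east1 --format='value(status.url)')
smoke_test() {
  local url=$1 tries=${2:-5}
  for _ in $(seq "$tries"); do
    if curl -fsS --max-time 10 "$url" > /dev/null 2>&1; then
      echo "OK: $url"
      return 0
    fi
    sleep 2
  done
  echo "FAILED: $url" >&2
  return 1
}
```

Run `smoke_test "$URL"` and check the exit code; it fits naturally at the end of a deploy script.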

Auto-Scaling and Traffic Management

Auto-Scaling Settings:

# --min-instances: minimum instances (0 = the service can scale to zero)
# --max-instances: upper bound on instances
# --concurrency: max concurrent requests per instance
gcloud run deploy my-service \
  --image=asia-east1-docker.pkg.dev/PROJECT_ID/REPO_NAME/my-app:v1 \
  --region=asia-east1 \
  --min-instances=0 \
  --max-instances=100 \
  --concurrency=80
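These settings interact: max instances times concurrency roughly caps how many requests can be in flight at once, so it's worth sanity-checking the numbers against your expected peak load:

```shell
# Rough capacity ceiling implied by the autoscaling settings above
max_instances=100
concurrency=80
echo "max in-flight requests: $(( max_instances * concurrency ))"
```

If your peak exceeds that ceiling, requests queue or are rejected; raise max-instances or concurrency accordingly.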

Traffic Split (Multi-Version Deployment):

# Deploy a new version without sending it traffic; --revision-suffix
# gives the revision a predictable name (my-service-v2)
gcloud run deploy my-service \
  --image=asia-east1-docker.pkg.dev/PROJECT_ID/REPO_NAME/my-app:v2 \
  --revision-suffix=v2 \
  --no-traffic

# Gradually shift traffic (assumes the previous revision was deployed
# the same way, as my-service-v1)
gcloud run services update-traffic my-service \
  --to-revisions=my-service-v2=50,my-service-v1=50

# Send all traffic to the latest revision
gcloud run services update-traffic my-service \
  --to-latest

Custom Domain and HTTPS Setup

Setting Up Custom Domain:

  1. Go to Cloud Run → Select service → Manage Custom Domains
  2. Click "Add Mapping"
  3. Enter your domain (e.g., api.example.com)
  4. Follow instructions to set up DNS

DNS Setup:

At your DNS provider, add the records the mapping wizard shows you (a CNAME for subdomains, A/AAAA records for root domains).

HTTPS:

Cloud Run provisions and renews the TLS certificate automatically once the DNS records propagate; no manual certificate management is needed.

Environment Variables and Secret Management

Setting Environment Variables:

gcloud run deploy my-service \
  --set-env-vars=DATABASE_URL=xxx,API_KEY=yyy

Using Secret Manager:

# First create Secret
echo -n "my-secret-value" | gcloud secrets create my-secret --data-file=-

# Mount Secret during deployment
gcloud run deploy my-service \
  --set-secrets=API_KEY=my-secret:latest

Benefits:

• Secrets stay out of deploy commands, YAML files, and version control
• Secrets are versioned, so you can rotate them and roll back
• Access is controlled per secret through IAM



GKE (Google Kubernetes Engine) Introduction

If you're operating services at sufficient scale and complexity, GKE is the most powerful choice.

Creating and Configuring GKE Clusters

Using Console:

  1. Go to GKE → Create Cluster
  2. Choose mode: Autopilot or Standard (explained next section)
  3. Set name and region
  4. Configure node pools (Standard mode)
  5. Create

Using gcloud:

# Autopilot mode
gcloud container clusters create-auto my-cluster \
  --region=asia-east1

# Standard mode
gcloud container clusters create my-cluster \
  --zone=asia-east1-b \
  --num-nodes=3 \
  --machine-type=e2-medium

Get Cluster Credentials:

gcloud container clusters get-credentials my-cluster \
  --region=asia-east1

After running, you can use kubectl to operate the cluster.

Autopilot vs Standard Mode Comparison

| Item | Autopilot | Standard |
|---|---|---|
| Node management | Google manages | You manage |
| Billing unit | Pod resources | Node resources |
| Configuration flexibility | Less | Fully customizable |
| Security | Hardened by default | Self-configured |
| Complexity | Low | High |
| Suitable for | Most users | Need special configurations |

Recommendations:

• Start with Autopilot: less to manage, secure defaults, and you pay per Pod
• Choose Standard only when you need node-level control (special machine types, GPUs, node daemons)

Workload Deployment Basics

Deploying a Simple Application:

Create deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: asia-east1-docker.pkg.dev/PROJECT_ID/REPO/my-app:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"

Deploy:

kubectl apply -f deployment.yaml

Common Commands:

# View Deployments
kubectl get deployments

# View Pods
kubectl get pods

# View Pod logs
kubectl logs <pod-name>

# Enter Pod
kubectl exec -it <pod-name> -- /bin/sh

# Scale replicas
kubectl scale deployment my-app --replicas=5
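`kubectl scale` is manual; for variable load, a HorizontalPodAutoscaler can adjust replicas automatically. A minimal sketch targeting the Deployment above (the 70% CPU target is an illustrative choice):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

This works because the Deployment sets CPU requests; the HPA computes utilization against requests, so Pods without requests can't be autoscaled on CPU.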

Service Exposure and Load Balancing

Create Service:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

Service Types:

| Type | Purpose | External Access |
|---|---|---|
| ClusterIP | Cluster-internal communication | No |
| NodePort | Opens a port on every node | Yes (rarely used directly) |
| LoadBalancer | Provisions a GCP load balancer | Yes |

Ingress (Advanced):

To manage routing for multiple services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80


Storage Service Integration

Compute services often need storage.

Cloud Storage Mounting and Usage

Accessing Cloud Storage from VM:

# gsutil is usually pre-installed on GCP VM images
# Upload file
gsutil cp local-file.txt gs://my-bucket/

# Download file
gsutil cp gs://my-bucket/file.txt ./

# Sync folder
gsutil rsync -r ./local-folder gs://my-bucket/folder

Accessing from Cloud Run:

// Requires: npm install @google-cloud/storage
// Uses Application Default Credentials, which work automatically on GCP
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

async function uploadFile() {
  await storage.bucket('my-bucket').upload('local-file.txt');
}

Persistent Disk Configuration

Add Disk to VM:

# Create disk
gcloud compute disks create my-disk \
  --size=100GB \
  --type=pd-ssd \
  --zone=asia-east1-b

# Attach to VM
gcloud compute instances attach-disk my-vm \
  --disk=my-disk \
  --zone=asia-east1-b

Mount Inside VM:

# After SSH into the VM, find the new disk (often /dev/sdb; confirm with lsblk)
lsblk
sudo mkfs.ext4 -m 0 -F /dev/sdb
sudo mkdir -p /mnt/data
sudo mount /dev/sdb /mnt/data

# Set auto-mount on boot (referencing the disk by UUID or
# /dev/disk/by-id is more robust than /dev/sdb, which can change)
echo '/dev/sdb /mnt/data ext4 defaults 0 0' | sudo tee -a /etc/fstab

Filestore (NFS) Use Cases

Suitable Scenarios:

• Files shared across multiple VMs or GKE Pods (CMS uploads, shared assets)
• Legacy applications that expect a POSIX file system
• Lift-and-shift workloads that already depend on NFS

Create Filestore:

gcloud filestore instances create my-filestore \
  --zone=asia-east1-b \
  --tier=BASIC_HDD \
  --file-share=name=vol1,capacity=1TB \
  --network=name=default

Mount on VM:

# Replace 10.0.0.2 with your Filestore instance's IP address (find it with:
#   gcloud filestore instances describe my-filestore --zone=asia-east1-b)
sudo apt-get install nfs-common
sudo mkdir -p /mnt/filestore
sudo mount 10.0.0.2:/vol1 /mnt/filestore


Common Issues and Best Practices

Practical problems and solutions commonly encountered.

Performance Tuning Recommendations

Compute Engine:

• Match the machine series to the workload (C2 for CPU-bound, M2 for memory-bound)
• Use pd-balanced or pd-ssd disks for I/O-heavy workloads

Cloud Run:

• Set --min-instances to 1 or more on latency-sensitive services to avoid cold starts
• Tune --concurrency to what your application can actually handle in parallel

GKE:

• Always set resource requests and limits so the scheduler can place Pods efficiently
• Use a HorizontalPodAutoscaler for variable load

Cost Control Techniques

Compute Engine:

• Stop dev/test VMs outside working hours (vCPU billing stops; disks still bill)
• Use Spot VMs for fault-tolerant batch work and committed use discounts for steady load

Cloud Run:

• Let idle services scale to 0 and right-size each instance's CPU and memory

GKE:

• Consider Autopilot so you pay for Pod resources instead of whole nodes
• Enable the cluster autoscaler and use Spot node pools for tolerant workloads

Monitoring and Logging Setup

Cloud Monitoring:

All GCP service metrics automatically go to Cloud Monitoring.

Key Metrics:

• CPU and memory utilization
• Request latency (p50/p95/p99) and error rate
• Instance or Pod count, to watch scaling behavior

Cloud Logging:

# View VM logs
gcloud logging read "resource.type=gce_instance"

# View Cloud Run logs
gcloud logging read "resource.type=cloud_run_revision"

# View GKE logs
gcloud logging read "resource.type=k8s_container"
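The filters above can be narrowed further; Cloud Logging's query language supports severity and time constraints, for example recent errors only (a sketch; verify the flags against your gcloud version):

```shell
# Cloud Logging filter for Cloud Run errors only
FILTER='resource.type=cloud_run_revision AND severity>=ERROR'
echo "filter: $FILTER"

# Requires an authenticated gcloud session, so shown for reference:
# gcloud logging read "$FILTER" --freshness=1h --limit=20
```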

Setting Up Alerts:

  1. Go to Cloud Monitoring → Alerting
  2. Create Alert Policy
  3. Select metrics and conditions
  4. Set notification channels (Email, Slack, PagerDuty)

For security settings, see "GCP Security and Cloud Armor Protection Complete Guide."



Need a Second Opinion on Architecture Design?

Good architecture can pay for itself many times over in operational savings.

Schedule Architecture Consultation and let us review your cloud architecture together.

CloudSwap's Architecture Consulting Services:



Conclusion: Building Your GCP Compute Architecture

After this tutorial, you should know how to choose and use GCP's compute services.

Quick Recap:

| Need | Choice | Reason |
|---|---|---|
| Full control | Compute Engine | Can install any software |
| Minimal ops | Cloud Run | No infrastructure to manage |
| Large-scale microservices | GKE | Powerful orchestration |
| Spiky traffic | Cloud Run | Can scale to 0 |
| GPUs | Compute Engine | Supports NVIDIA GPUs |
| Complex network needs | GKE | Fine-grained network control |

Next Step Recommendations:

  1. For new projects, start with Cloud Run
  2. If you need full control, use Compute Engine
  3. Once you're running many services (roughly ten or more), consider GKE
  4. Mixed use is normal—don't force everything into one type

Hands-on is the best way to learn. Open a test project and run through all the examples in this tutorial!




