AWS Lambda + EventBridge Event-Driven Architecture Practical Tutorial



Introduction: Why Event-Driven Is the Modern Architecture Trend

πŸ’‘ Key Takeaway: Does your system work like this?

Service A directly calls Service B, Service B calls Service C.

One service goes down, the entire chain goes down.

This is the problem with traditional "synchronous call" architecture. Services are too tightly coupledβ€”one change affects everything.

Event-driven architecture is completely different.

Services don't call each other directly; they communicate through "events." What happened? Whoever is interested handles it. Services are loosely coupled, can scale independently, and deploy independently.

This article will guide you through building event-driven systems with Lambda + EventBridge.

If you're not familiar with Lambda basics, consider reading AWS Lambda Complete Guide first.

Illustration 1: Event-Driven vs Traditional Architecture Comparison


Event-Driven Architecture Concepts

Before implementation, understand the core concepts.

What is Event-Driven

The core of event-driven architecture is "events."

Events are facts that have already happened. For example:

  - An order was created
  - A file was uploaded to S3
  - A payment was completed

When an event occurs, every interested service is notified and processes it.

This differs from traditional "imperative" calls: an imperative call says "do this," while an event simply states "this happened," and any interested consumer reacts.

Differences from Traditional Architecture

| Feature | Traditional Synchronous | Event-Driven |
| --- | --- | --- |
| Service coupling | Tight (direct calls) | Loose (through events) |
| Failure impact | Cascading failures | Isolated failures |
| Scalability | Synchronized scaling | Independent scaling |
| Response time | Synchronous waiting | Asynchronous processing |
| Adding features | Must modify the caller | Just subscribe to events |

Benefits and Use Cases

Benefits:

  - Loose coupling: producers don't know who consumes their events
  - Each service scales and deploys independently
  - New features can subscribe to existing events without touching producers

Suitable Scenarios:

  - Asynchronous workflows (order processing, file processing, notifications)
  - Fan-out: one event, many consumers
  - Streaming and real-time data pipelines
  - Scheduled and batch tasks

Unsuitable Scenarios:

  - Flows that need an immediate, synchronous response within a single request
  - Strongly consistent transactions spanning multiple services
  - Simple CRUD applications where a direct call is easier to reason about



AWS EventBridge Introduction

EventBridge is AWS's Serverless event bus service.

It's the core component of event-driven architecture.

Event Bus Concept

Event Bus is the "hub" for events.

All events are sent to Event Bus, then routed to targets based on rules.

AWS provides three types of Event Bus:

  - Default event bus: automatically receives events from AWS services
  - Custom event bus: for your application's own events
  - Partner event bus: receives events from integrated SaaS partners

Event Rules

Rules define "which events" route to "which targets."

A Rule contains:

  - An event pattern (or a schedule) that decides which events match
  - One or more targets (Lambda, SQS, SNS, Step Functions, etc.) that receive the matched events

For example: "When S3 has a new file upload, trigger Lambda to process"

Event Patterns

Event Patterns define filter conditions in JSON format.

{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["my-bucket"]
    }
  }
}

This Pattern only matches object creation events from my-bucket.

Patterns support multiple matching methods:

  - Exact value matching
  - prefix / suffix matching
  - numeric comparisons and ranges
  - exists checks
  - anything-but (exclusion) matching
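For example, a single pattern can combine several operators. This sketch (bucket, prefix, and size values are illustrative) matches only CSV or uploads-prefixed objects larger than 1 MB:

```json
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "object": {
      "key": [{ "prefix": "uploads/" }, { "suffix": ".csv" }],
      "size": [{ "numeric": [">", 1048576] }]
    }
  }
}
```

Multiple entries in the same array act as OR; separate fields act as AND.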



Lambda Event Trigger Methods

Lambda supports multiple event trigger methods. Understanding the differences is important.

Synchronous vs Asynchronous Invocation

Synchronous Invocation:

  - The caller waits for the function to finish and receives the result (or error) directly
  - Retries are the caller's responsibility
  - Examples: API Gateway, Application Load Balancer, direct Invoke calls

Asynchronous Invocation:

  - Lambda queues the event and returns immediately (HTTP 202)
  - Lambda retries failed events automatically (twice by default)
  - Examples: S3, SNS, EventBridge triggers
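The two modes are selected with the InvocationType parameter of the Lambda Invoke API. A small sketch that builds the call arguments (the function name is hypothetical):

```python
import json

def build_invoke_args(function_name, payload, asynchronous=False):
    # InvocationType "Event" queues the request and returns HTTP 202 immediately;
    # "RequestResponse" (the default) waits for the function's result.
    return {
        "FunctionName": function_name,
        "InvocationType": "Event" if asynchronous else "RequestResponse",
        "Payload": json.dumps(payload),
    }

# Usage with boto3:
#   client = boto3.client("lambda")
#   client.invoke(**build_invoke_args("my-fn", {"orderId": 1}, asynchronous=True))
```

With `asynchronous=True` the caller gets no return value from the function, only the 202 acknowledgement, so errors must be handled via retries, Destinations, or a DLQ.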

Event Source Mapping Explained

Event Source Mapping is a special trigger method.

Lambda service actively polls events from data sources, rather than passively receiving them.

Supported sources:

  - SQS (Standard and FIFO)
  - Kinesis Data Streams
  - DynamoDB Streams
  - Amazon MSK and self-managed Apache Kafka

Characteristics of this method:

  - Lambda polls the source for you; no polling code required
  - Events are delivered in batches
  - The poller invokes the function synchronously, and failed batches are retried according to the source's rules

Batch Size and Batch Window Settings

Two key parameters for Event Source Mapping:

Batch Size: how many events to process per invocation

  - SQS: 1–10 by default, up to 10,000 when a batch window is set (FIFO: up to 10)
  - Kinesis / DynamoDB Streams: up to 10,000

Batch Window: maximum seconds to wait while collecting a batch (0–300)

Selection Recommendations:

  - Start small (10–50) and adjust based on monitoring
  - Increase Batch Size when cost-sensitive, to reduce invocation count
  - Keep total batch processing time well below the function timeout

To understand Batch Size's impact on costs, see AWS Lambda Pricing Complete Guide.
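Both settings are applied when the event source mapping is created. A CLI sketch (the queue ARN and function name are hypothetical; the flags are the real AWS CLI options):

```shell
# Create an SQS-triggered mapping with a batch of up to 50 messages,
# waiting at most 5 seconds to fill each batch.
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:sqs:us-east-1:123456789012:my-queue \
    --batch-size 50 \
    --maximum-batching-window-in-seconds 5
```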


Not sure whether to use sync or async? Book architecture consultation and let experts help you choose.



Implementation Tutorial

Let's look at three common use cases.

Scenario 1: Scheduled Tasks (Cron)

Execute data backup every day at 3 AM.

Step 1: Create Lambda Function

import json
from datetime import datetime

def lambda_handler(event, context):
    print(f"Backup started at {datetime.now()}")

    # Execute backup logic
    backup_result = perform_backup()

    print(f"Backup completed: {backup_result}")

    return {
        "status": "success",
        "timestamp": str(datetime.now())
    }

def perform_backup():
    # Actual backup logic
    return "backup_2024_01_15.tar.gz"

Step 2: Create EventBridge Schedule Rule

Go to EventBridge β†’ Rules β†’ Create rule:

  1. Rule type: select "Schedule"
  2. Schedule pattern: enter a cron expression
  3. Target: choose the Lambda function created in Step 1

Cron Expression Explanation:

cron(minute hour day-of-month month day-of-week year)
cron(0 3 * * ? *)    = daily at 03:00
cron(0 */2 * * ? *)  = every 2 hours
cron(0 9 ? * MON *)  = every Monday at 09:00

Scenario 2: S3 β†’ EventBridge β†’ Lambda

Automatically process files when uploaded to S3.

Step 1: Enable S3 EventBridge Notifications

  1. Go to S3 bucket settings
  2. Properties β†’ Event notifications
  3. Enable "Amazon EventBridge"

Step 2: Create EventBridge Rule

Event Pattern:

{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["my-upload-bucket"]
    },
    "object": {
      "key": [{
        "prefix": "uploads/"
      }]
    }
  }
}

Step 3: Lambda Processing Function

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Get S3 info from the EventBridge event
    detail = event['detail']
    bucket = detail['bucket']['name']
    key = detail['object']['key']

    print(f"Processing file: s3://{bucket}/{key}")

    # Read the file
    response = s3.get_object(Bucket=bucket, Key=key)
    content = response['Body'].read()

    # Process the file (e.g., format conversion, content analysis)
    result = process_file(content)

    return {"status": "processed", "file": key}

def process_file(content):
    # Placeholder for the actual processing logic
    return len(content)

Scenario 3: Custom Events (PutEvents)

Application sends custom events.

Send Events (Python SDK):

import boto3
import json

eventbridge = boto3.client('events')

def send_order_created_event(order):
    response = eventbridge.put_events(
        Entries=[
            {
                'Source': 'myapp.orders',
                'DetailType': 'Order Created',
                'Detail': json.dumps({
                    'orderId': order['id'],
                    'customerId': order['customer_id'],
                    'amount': order['amount'],
                    'items': order['items']
                }),
                'EventBusName': 'my-custom-bus'
            }
        ]
    )
    return response

Subscribe to Events (EventBridge Rule):

{
  "source": ["myapp.orders"],
  "detail-type": ["Order Created"],
  "detail": {
    "amount": [{
      "numeric": [">=", 1000]
    }]
  }
}

This rule only processes orders with amount >= 1000.

If you want to manage these settings with Infrastructure as Code, see Terraform AWS Lambda Deployment Complete Tutorial.

Illustration 2: EventBridge Event Routing Flow Diagram


Advanced Event Source Mapping

Event Source Mapping is suitable for processing streaming data.

SQS Integration (Standard vs FIFO)

Standard Queue:

  - At-least-once delivery (occasional duplicates are possible)
  - Best-effort ordering
  - Nearly unlimited throughput

FIFO Queue:

  - Exactly-once processing
  - Strict ordering within a message group
  - Throughput caps (300 msg/s per API action, 3,000 with batching)

Lambda Configuration:

# Lambda handler triggered by SQS
import json

def lambda_handler(event, context):
    for record in event['Records']:
        body = json.loads(record['body'])
        message_id = record['messageId']

        try:
            process_message(body)
        except Exception as e:
            # Processing failed; the message returns to the queue for retry
            print(f"Error processing {message_id}: {e}")
            raise

    return {"processed": len(event['Records'])}
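Raising on any failure retries the whole batch. If you enable ReportBatchItemFailures on the event source mapping, the handler can instead report only the failed messages. A sketch (`process_message` is a hypothetical stand-in for your business logic):

```python
import json

def process_message(body):
    # Hypothetical business logic; rejects malformed messages
    if "orderId" not in body:
        raise ValueError("missing orderId")

def lambda_handler(event, context):
    # With ReportBatchItemFailures enabled, only the messages listed in
    # batchItemFailures are returned to the queue; the rest are deleted.
    failures = []
    for record in event["Records"]:
        try:
            process_message(json.loads(record["body"]))
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

An empty `batchItemFailures` list marks the whole batch as successful.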

DynamoDB Streams

Listen to DynamoDB data change events.

Enable Streams:

  1. Go to DynamoDB table settings
  2. Exports and streams β†’ DynamoDB Streams
  3. Select view type (KEYS_ONLY, NEW_IMAGE, OLD_IMAGE, NEW_AND_OLD_IMAGES)

Lambda Processing:

def lambda_handler(event, context):
    for record in event['Records']:
        event_name = record['eventName']  # INSERT, MODIFY, REMOVE

        if event_name == 'INSERT':
            new_item = record['dynamodb']['NewImage']
            handle_new_item(new_item)

        elif event_name == 'MODIFY':
            old_item = record['dynamodb']['OldImage']
            new_item = record['dynamodb']['NewImage']
            handle_update(old_item, new_item)

        elif event_name == 'REMOVE':
            old_item = record['dynamodb']['OldImage']
            handle_delete(old_item)

    return {"processed": len(event['Records'])}
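Note that the NewImage/OldImage payloads arrive in DynamoDB's attribute-value JSON (e.g., `{"name": {"S": "widget"}}`). boto3 ships a full `TypeDeserializer` for this; the simplified sketch below handles just the S, N, and BOOL types:

```python
from decimal import Decimal

def deserialize(image):
    # Simplified converter for DynamoDB attribute-value JSON.
    # Covers S (string), N (number), and BOOL only;
    # boto3's TypeDeserializer handles every type.
    out = {}
    for key, attr_value in image.items():
        (type_tag, value), = attr_value.items()
        if type_tag == "S":
            out[key] = value
        elif type_tag == "N":
            out[key] = Decimal(value)  # N is transported as a string
        elif type_tag == "BOOL":
            out[key] = value
        else:
            raise ValueError(f"unsupported type: {type_tag}")
    return out
```

Call it on `record['dynamodb']['NewImage']` before passing the item to your handlers.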

Kinesis Data Streams

Process real-time streaming data.

Features:

Lambda Configuration Considerations:
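Because Kinesis delivers record data base64-encoded, the handler must decode before parsing. A minimal sketch:

```python
import base64
import json

def lambda_handler(event, context):
    # Kinesis record payloads arrive base64-encoded strings;
    # decode, then parse the original JSON the producer wrote.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        print(f"partition key: {record['kinesis']['partitionKey']}, data: {payload}")
    return {"processed": len(event["Records"])}
```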


Event-driven architecture design is complex? SQS, Kinesis, DynamoDB Streams each have suitable scenarios.

Book architecture consultation and let us design the optimal event flow for you.



Asynchronous Processing and Error Handling

Error handling in event-driven systems differs from synchronous systems.

Lambda Destinations

Lambda Destinations let you specify handling targets for success/failure.

Configuration:

  1. Go to Lambda function settings
  2. Asynchronous invocation β†’ Destinations
  3. On success: SQS, SNS, EventBridge, another Lambda
  4. On failure: SQS, SNS, EventBridge, another Lambda

Use Cases:

  - On success: chain to the next processing step or publish a completion notification
  - On failure: route the event, with its error context, to SQS/SNS for alerting and replay
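Destinations can also be set from the CLI. A sketch (function name and queue ARNs are hypothetical; the flag is the real AWS CLI option):

```shell
# Route successful async results and exhausted failures to separate SQS queues.
aws lambda put-function-event-invoke-config \
    --function-name my-function \
    --destination-config '{
      "OnSuccess": {"Destination": "arn:aws:sqs:us-east-1:123456789012:success-queue"},
      "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:failure-queue"}
    }'
```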

DLQ (Dead Letter Queue)

Failed events need somewhere to go.

DLQ captures all events that failed after retries, allowing you to:

  - Inspect the failed payloads to find the root cause
  - Alarm when the queue is non-empty
  - Fix the bug, then replay the messages

Configure DLQ:

# Using SAM/CloudFormation
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      DeadLetterQueue:
        Type: SQS
        TargetArn: !GetAtt DeadLetterQueue.Arn

  DeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: my-function-dlq

For more error handling details, see AWS Lambda Error Handling Complete Guide.

Retry Mechanism Configuration

Default Retries for Asynchronous Invocation:

  - 2 retries (3 attempts in total)
  - Maximum event age: 6 hours (older events are discarded or sent to the failure destination)

Custom Configuration:

# Using AWS CLI to configure
aws lambda put-function-event-invoke-config \
    --function-name my-function \
    --maximum-retry-attempts 1 \
    --maximum-event-age-in-seconds 3600

Event Source Mapping Retries:

  - SQS: failed messages return to the queue after the visibility timeout; the queue's redrive policy moves them to a DLQ once maxReceiveCount is exceeded
  - Kinesis / DynamoDB Streams: a failed batch is retried until the record expires, unless maximum retry attempts, bisect-on-error, or an on-failure destination is configured



Best Practices

Building robust event-driven systems requires following certain principles.

Idempotency Design

Events may be processed multiple times (network retransmission, retry mechanism).

Idempotency: Processing the same event multiple times yields the same result as processing once.

Implementation:

import boto3
from datetime import datetime

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('processed-events')

def lambda_handler(event, context):
    event_id = event['id']

    # Check whether this event was already processed
    try:
        table.put_item(
            Item={'eventId': event_id, 'processedAt': datetime.now().isoformat()},
            ConditionExpression='attribute_not_exists(eventId)'
        )
    except dynamodb.meta.client.exceptions.ConditionalCheckFailedException:
        print(f"Event {event_id} already processed, skipping")
        return {"status": "skipped"}

    # Process the event
    result = process_event(event)

    return {"status": "processed", "result": result}

Event Version Management

Event formats evolve over time.

Recommended Approach: carry an explicit version field in every event so consumers can branch on it:

{
  "version": "1.0",
  "source": "myapp.orders",
  "detail-type": "Order Created",
  "detail": {
    "orderId": "12345",
    "version": "v2",
    "data": { ... }
  }
}
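One way to consume such versioned events is a dispatch table. A sketch (handler names and field layout are illustrative) that routes on the detail's version field, defaulting legacy events to v1:

```python
def handle_v1(detail):
    # v1 carried orderId at the top level of detail
    return {"orderId": detail["orderId"]}

def handle_v2(detail):
    # v2 nests payload fields under "data"
    return {"orderId": detail["data"]["orderId"]}

HANDLERS = {"v1": handle_v1, "v2": handle_v2}

def lambda_handler(event, context):
    detail = event["detail"]
    # Events published before versioning are treated as v1
    version = detail.get("version", "v1")
    handler = HANDLERS.get(version)
    if handler is None:
        raise ValueError(f"unknown event version: {version}")
    return handler(detail)
```

Adding a v3 format then means adding one handler function, not touching existing ones.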

Monitoring and Tracing

Key Metrics:

  - Lambda: Invocations, Errors, Throttles, Duration, IteratorAge (stream sources)
  - EventBridge: TriggeredRules, Invocations, FailedInvocations
  - DLQ: ApproximateNumberOfMessagesVisible

Configure CloudWatch Alarms:

  - Alarm on Errors and Throttles
  - Alarm on a growing IteratorAge (processing is falling behind)
  - Alarm when the DLQ is non-empty

Illustration 3: Event-Driven System Monitoring Dashboard


FAQ

What's the difference between EventBridge and SNS/SQS?

EventBridge focuses on event routing and filtering, supporting complex event pattern matching; SNS is a publish/subscribe service suitable for simple message broadcasting; SQS is a message queue suitable for decoupling and traffic smoothing. The three can be combined: EventBridge routes events to SNS for broadcasting, or to SQS for buffered processing.

How to choose Batch Size for Event Source Mapping?

Decide based on single event processing time and overall latency requirements. If single processing takes 100ms, Batch Size 100 may cause 10-second latency. Recommend starting small (10-50), monitoring performance before adjusting. Increase Batch Size when cost-sensitive to reduce invocation count.

How to ensure events are not lost?

Use DLQ to capture failed events, set appropriate retry mechanisms, implement idempotent processing to support safe retries. For critical events, consider storing events in persistent storage (S3, DynamoDB) before processing.

Can EventBridge rules work cross-Region?

Yes. Using EventBridge's cross-Region event feature, you can route events to Event Buses in other Regions. This is suitable for multi-Region deployed applications or disaster recovery scenarios.



Conclusion: Embracing the Event-Driven Future

Event-driven architecture is not just a technical choice, but a mindset shift.

From "which service should call which service" to "what happened, who needs to know."

This thinking makes systems more resilient, scalable, and maintainable.

Key Points Recap:

  1. EventBridge is the core of event routing
  2. Event Source Mapping is suitable for streaming data
  3. Idempotency design is the foundation of robust systems
  4. Monitoring and DLQ ensure events are not lost

If you need to process events at the CDN level, see Lambda@Edge Edge Computing to execute lightweight logic at global edge locations.

Next Steps:



Need Professional Event-Driven Architecture Planning?

If you're:

Book architecture consultation, we'll respond within 24 hours.

Proper event architecture significantly improves system resilience and maintainability.




