swift-aws-lambda-runtime is an important library for the Swift on Server ecosystem. The initial API was written before async/await was introduced to Swift. When async/await was introduced, shims were added to bridge between the underlying SwiftNIO `EventLoop` interfaces and async/await. However, just like gRPC-swift and postgres-nio, we now want to shift to solely using async/await instead of `EventLoop` interfaces. For this, large parts of the current API have to be reconsidered.
Versions:

- v1 (2024-08-07): Initial version
- v1.1:
  - Remove the `reportError(_:)` method from `LambdaResponseStreamWriter` and instead make the `handle(...)` method of `StreamingLambdaHandler` throwing.
  - Remove the `addBackgroundTask(_:)` method from `LambdaContext` due to structured concurrency concerns and introduce the `LambdaWithBackgroundProcessingHandler` protocol as a solution.
  - Introduce `LambdaHandlerAdapter`, which adapts handlers conforming to `LambdaHandler` with `LambdaWithBackgroundProcessingHandler`.
  - Update `LambdaCodableAdapter` to now be generic over any handler conforming to `LambdaWithBackgroundProcessingHandler` instead of `LambdaHandler`.
- v1.2:
  - Remove `~Copyable` from `LambdaResponseStreamWriter` and `LambdaResponseWriter`. Instead, throw an error when `finish()` is called multiple times or when `write`/`writeAndFinish` is called after `finish()`.
The current API extensively uses the `EventLoop` family of interfaces from SwiftNIO in many areas. Using these interfaces correctly requires developers to exercise great care and understand the various transform methods used to work with `EventLoop`s and `EventLoopFuture`s. This results in a lot of cognitive complexity and makes the code in the current API hard to reason about and maintain. For these reasons, the overarching trend in the Swift on Server ecosystem is to shift to newer, more readable Swift concurrency constructs and to de-couple from SwiftNIO's `EventLoop` interfaces.
A Lambda function can currently be implemented through conformance to the various handler protocols defined in `AWSLambdaRuntimeCore/LambdaHandler`. Each of these protocols has an extension which implements a `static func main()`. This allows users to annotate their `LambdaHandler`-conforming object with `@main`. The `static func main()` calls the internal `Lambda.run()` function, which starts the Lambda function. Since the `Lambda.run()` method is internal, users cannot override the default implementation. This has proven challenging for users who want to set up global properties before the Lambda starts up. Setting up global properties is required to customize the Swift Logging, Metrics, and Tracing backends.
The `SimpleLambdaHandler` protocol provides a quick and easy way to implement a basic Lambda function. It only requires an implementation of the `handle` function, where the business logic of the Lambda function can be written. `SimpleLambdaHandler` is perfectly sufficient for small use-cases, as the user does not need to spend much time looking into the library.
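For illustration, a minimal v1 `SimpleLambdaHandler` conformance might look roughly like this (the `HelloLambda` type and the String-in/String-out event shape are assumptions for this example):

import AWSLambdaRuntime

@main
struct HelloLambda: SimpleLambdaHandler {
    // Echo the incoming event back; encoding/decoding is handled by the library.
    func handle(_ event: String, context: LambdaContext) async throws -> String {
        "Hello, \(event)!"
    }
}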
However, `SimpleLambdaHandler` cannot be used when services such as a database client need to be initialized before the Lambda runtime starts and then also gracefully shut down prior to the runtime terminating. This is because the only way to register termination logic is through the `LambdaInitializationContext` (containing a field `terminator: LambdaTerminator`), which is created and used internally within `LambdaRuntime` and never exposed through `SimpleLambdaHandler`. For such use-cases, other handler protocols like `LambdaHandler` must be used. `LambdaHandler` exposes a `context` argument of type `LambdaInitializationContext` through its initializer. Within the initializer, required services can be initialized and their graceful shutdown logic can be registered with the `context.terminator.register` function.
Yet, `LambdaHandler` is quite cumbersome to use for such use-cases, as users have to deviate from the established norms of the Swift on Server ecosystem in order to cleanly manage the lifecycle of the services they intend to use. This is because the convenient swift-service-lifecycle v2 library, which is commonly used for cleanly managing the lifecycles of required services and is widely supported by many libraries, cannot be used in a structured concurrency manner. The Lambda runtime can only be started using the internal `Lambda.run()` function. This function is called by the `main()` function defined by the `LambdaHandler` protocol, preventing users from injecting initialized services into the runtime prior to it starting. As shown below, this forces users to adopt an unstructured concurrency approach and manually initialize services, which makes it easy to then forget to gracefully shut down the initialized services:
struct MyLambda: LambdaHandler {
let pgClient: PostgresClient
init(context: AWSLambdaRuntimeCore.LambdaInitializationContext) async throws {
/// Instantiate service
let client = PostgresClient(configuration: ...)
/// Unstructured concurrency to initialize the service
let pgTask = Task {
await client.run()
}
/// Store the client in `self` so that it can be used in `handle(...)`
self.pgClient = client
/// !!! Must remember to explicitly register termination logic for PostgresClient !!!
context.terminator.register(
name: "PostgreSQL Client",
handler: { eventLoop in
pgTask.cancel()
return eventLoop.makeFutureWithTask {
await pgTask.value
}
}
)
}
func handle(_ event: Event, context: LambdaContext) async throws -> Output {
/// Use the initialized service stored in `self.pgClient`
try await self.pgClient.query(...)
}
}
In the current API, there are extensions and Codable wrapper classes for decoding events and encoding computed responses for each different handler protocol and for both `String` and JSON formats. This has resulted in a lot of boilerplate code which can very easily be made generic and simplified in v2.

In April 2023, AWS introduced support for response streaming in Lambda. The current API does not support streaming. For v2 we want to change this.

In May, AWS described in a blog post that you can run background tasks in Lambda until the runtime asks for more work from the control plane. We want to support this by adding a new API that allows background processing, even after the response has been returned.
Large parts of `Lambda`, `LambdaHandler`, and `LambdaRuntime` will be re-written to use async/await constructs in place of the `EventLoop` family of interfaces.
- Instead of conforming to a handler protocol, users can now create a `LambdaRuntime` by passing in a handler closure. `LambdaRuntime` conforms to `ServiceLifecycle.Service` by implementing a `run()` method that contains initialization and graceful shutdown logic.
- This allows the lifecycle of the `LambdaRuntime` to be managed with swift-service-lifecycle, alongside and in the same way the lifecycles of the required services are managed, e.g. `try await ServiceGroup(services: [postgresClient, ..., lambdaRuntime], ...).run()`.
- Dependencies can now be injected into `LambdaRuntime`. With swift-service-lifecycle, services will be initialized together with `LambdaRuntime`.
- The required services can then be used within the handler in a structured concurrency manner. swift-service-lifecycle takes care of listening for termination signals and terminating the services as well as the `LambdaRuntime` in the correct order.
- `LambdaTerminator` can now be eliminated because its role is replaced with swift-service-lifecycle. The termination logic of the Lambda function will be implemented in the conforming `run()` function of `LambdaRuntime`.
With this, the earlier code snippet can be replaced with something much easier to read, maintain, and debug:
/// Instantiate services
let postgresClient = PostgresClient()
/// Instantiate LambdaRuntime with a closure handler implementing the business logic of the Lambda function
let runtime = LambdaRuntime { (event: Input, context: LambdaContext) in
/// Use initialized service within the handler
try await postgresClient.query(...)
}
/// Use ServiceLifecycle to manage the initialization and termination
/// of the services as well as the LambdaRuntime
let serviceGroup = ServiceGroup(
services: [postgresClient, runtime],
configuration: .init(gracefulShutdownSignals: [.sigterm]),
logger: logger
)
try await serviceGroup.run()
A detailed explanation is provided in the Codable Support section. In short, much of the boilerplate code defined for each handler protocol in `Lambda+Codable` and `Lambda+String` will be replaced with a single `LambdaCodableAdapter` struct. This adapter struct is generic over (1) any handler conforming to a new handler protocol `LambdaWithBackgroundProcessingHandler`, (2) the user-specified input and output types, and (3) any decoder and encoder conforming to the `LambdaEventDecoder` and `LambdaOutputEncoder` protocols. The adapter wraps the underlying handler with encoding/decoding logic.
Below are explanations for all types that we want to use in AWS Lambda Runtime v2.

We will introduce a new `LambdaResponseStreamWriter` protocol. It is used in the new `StreamingLambdaHandler` (defined below), which is the new base protocol for the `LambdaRuntime` (also defined below).
/// A writer object to write the Lambda response stream into
public protocol LambdaResponseStreamWriter {
/// Write a response part into the stream. The HTTP response is started lazily before the first call to `write(_:)`.
/// Bytes written to the writer are streamed continually.
func write(_ buffer: ByteBuffer) async throws
/// End the response stream and the underlying HTTP response.
func finish() async throws
/// Write a response part into the stream and end the response stream as well as the underlying HTTP response.
func writeAndFinish(_ buffer: ByteBuffer) async throws
}
If the user does not call `finish()`, the library will automatically finish the stream after the last `write`. Appropriate errors will be thrown if `finish()` is called multiple times, or if `write`/`writeAndFinish` is called after `finish()`.
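As a minimal sketch of these semantics (the `respond(with:)` helper is purely illustrative and assumes a writer handed to a handler):

func respond(with responseWriter: some LambdaResponseStreamWriter) async throws {
    try await responseWriter.write(ByteBuffer(string: "partial"))
    try await responseWriter.write(ByteBuffer(string: " result"))
    try await responseWriter.finish()
    // Calling finish() again, or write(_:)/writeAndFinish(_:) after this point, would throw.
}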
`LambdaContext` will be largely unchanged, but the `eventLoop` property will be removed. The `allocator` property of type `ByteBufferAllocator` will also be removed because (1) we generally want to reduce the number of SwiftNIO types exposed in the API, and (2) `ByteBufferAllocator` does not optimize allocation strategies. The common pattern observed across many libraries is to re-use existing `ByteBuffer`s as much as possible. This is also what we do in the `LambdaCodableAdapter` implementation (explained in the Codable Support section).
/// A context object passed as part of an invocation in LambdaHandler handle functions.
public struct LambdaContext: Sendable {
/// The request ID, which identifies the request that triggered the function invocation.
public var requestID: String { get }
/// The AWS X-Ray tracing header.
public var traceID: String { get }
/// The ARN of the Lambda function, version, or alias that's specified in the invocation.
public var invokedFunctionARN: String { get }
/// The timestamp at which the function times out.
public var deadline: DispatchWallTime { get }
/// For invocations from the AWS Mobile SDK, data about the Amazon Cognito identity provider.
public var cognitoIdentity: String? { get }
/// For invocations from the AWS Mobile SDK, data about the client application and device.
public var clientContext: String? { get }
/// `Logger` to log with.
///
/// - note: The `LogLevel` can be configured using the `LOG_LEVEL` environment variable.
public var logger: Logger { get }
}
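For example, a handler body could use this metadata for logging; the `logRequestMetadata` helper below is hypothetical and relies only on the properties shown above:

func logRequestMetadata(context: LambdaContext) {
    // Attach the invocation's metadata to a log statement via the context's Logger.
    context.logger.info(
        "Handling invocation",
        metadata: [
            "requestID": "\(context.requestID)",
            "traceID": "\(context.traceID)",
            "invokedFunctionARN": "\(context.invokedFunctionARN)",
        ]
    )
}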
We introduce three handler protocols: `StreamingLambdaHandler`, `LambdaHandler`, and `LambdaWithBackgroundProcessingHandler`.

The new `StreamingLambdaHandler` protocol is the base protocol to implement a Lambda function. Most users will not use this protocol directly and will instead use the `LambdaHandler` protocol defined below.
/// The base StreamingLambdaHandler protocol
public protocol StreamingLambdaHandler {
/// The business logic of the Lambda function
/// - Parameters:
/// - event: The invocation's input data
/// - responseWriter: A ``LambdaResponseStreamWriter`` to write the invocation's response to.
/// If no response or error is written to the `responseWriter` it will
/// report an error to the invoker.
/// - context: The LambdaContext containing the invocation's metadata
/// - Throws:
/// How the thrown error will be handled by the runtime:
/// - An invocation error will be reported if the error is thrown before the first call to
/// ``LambdaResponseStreamWriter.write(_:)``.
/// - If the error is thrown after call(s) to ``LambdaResponseStreamWriter.write(_:)`` but before
/// a call to ``LambdaResponseStreamWriter.finish()``, the response stream will be closed and trailing
/// headers will be sent.
/// - If ``LambdaResponseStreamWriter.finish()`` has already been called before the error is thrown, the
/// error will be logged.
mutating func handle(_ event: ByteBuffer, responseWriter: some LambdaResponseStreamWriter, context: LambdaContext) async throws
}
Using this protocol requires the `handle` method to receive the incoming event as a `ByteBuffer` and to return the output as a `ByteBuffer` too.

Through the `LambdaResponseStreamWriter`, which is passed as an argument to the `handle` function, the response can be streamed by calling the `write(_:)` function of the `LambdaResponseStreamWriter` with partial data repeatedly before finally closing the response stream by calling `finish()`. Users can also choose to return the entire output at once and not stream the response by calling `writeAndFinish(_:)`.

This protocol also allows for background tasks to be run after a result has been reported to the AWS Lambda control plane, since the `handle(...)` function is free to implement any background work after the call to `responseWriter.finish()`.

The protocol is defined in a way that supports a broad range of use-cases. The `handle` method is marked as `mutating` to allow handlers to be implemented with a `struct`.

An implementation that sends the numbers 1 to 10, one every 500 ms, could look like this:
struct SendNumbersWithPause: StreamingLambdaHandler {
    func handle(
        _ event: ByteBuffer,
        responseWriter: some LambdaResponseStreamWriter,
        context: LambdaContext
    ) async throws {
        for i in 1...10 {
            // Send partial data
            try await responseWriter.write(ByteBuffer(string: #"\#(i)\n\r"#))
            // Perform some long asynchronous work
            try await Task.sleep(for: .milliseconds(500))
        }
        // All data has been sent. Close off the response stream.
        try await responseWriter.finish()
    }
}
This handler protocol will be the go-to choice for most use-cases because it is completely agnostic to any encoding/decoding logic: conforming objects simply have to implement the `handle` function, where the input and return types are Swift objects.

Note that the `handle` function does not receive a `LambdaResponseStreamWriter` as an argument. Response streaming is not viable for `LambdaHandler` because the output has to be encoded prior to it being sent, e.g. it is not possible to encode a partial/incomplete JSON string.
public protocol LambdaHandler {
/// Generic input type
/// The body of the request sent to Lambda will be decoded into this type for the handler to consume
associatedtype Event
/// Generic output type
/// This is the return type of the handle() function.
associatedtype Output
/// The business logic of the Lambda function. Receives a generic input type and returns a generic output type.
/// Agnostic to encoding/decoding
mutating func handle(_ event: Event, context: LambdaContext) async throws -> Output
}
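For illustration, a conformance might look like the following sketch; the `EchoHandler`, `Input`, and `Greeting` types are assumptions for this example:

struct Input: Decodable {
    let message: String
}

struct Greeting: Encodable {
    let echoedMessage: String
}

struct EchoHandler: LambdaHandler {
    // Decoding of Input and encoding of Greeting are performed by the adapters described later.
    func handle(_ event: Input, context: LambdaContext) async throws -> Greeting {
        Greeting(echoedMessage: event.message)
    }
}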
This protocol is exactly like `LambdaHandler`, with the only difference being the added support for executing background work after the result has been sent to the AWS Lambda control plane.

This is achieved by not having a return type in the `handle` function. The output is instead written into a `LambdaResponseWriter` that is passed in as an argument, meaning that the `handle` function is then free to implement any background work after the result has been sent to the AWS Lambda control plane.

`LambdaResponseWriter` has different semantics to the `LambdaResponseStreamWriter`. Where the `write(_:)` function of `LambdaResponseStreamWriter` means writing into a response stream, the `write(_:)` function of `LambdaResponseWriter` simply serves as a mechanism to return the output without explicitly returning from the `handle` function.
public protocol LambdaResponseWriter<Output> {
associatedtype Output
/// Sends the generic Output object (representing the computed result of the handler)
/// to the AWS Lambda response endpoint.
/// An error will be thrown if this function is called more than once.
func write(_: Output) async throws
}
public protocol LambdaWithBackgroundProcessingHandler {
/// Generic input type
/// The body of the request sent to Lambda will be decoded into this type for the handler to consume
associatedtype Event
/// Generic output type
/// This is the type that the handle() function will send through the ``LambdaResponseWriter``.
associatedtype Output
/// The business logic of the Lambda function. Receives a generic input type and returns a generic output type.
/// Agnostic to JSON encoding/decoding
func handle(
_ event: Event,
outputWriter: some LambdaResponseWriter<Output>,
context: LambdaContext
) async throws
}
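An example conformance that echoes the received message and then performs background work could look like this: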
struct BackgroundProcessingHandler: LambdaWithBackgroundProcessingHandler {
struct Input: Decodable {
let message: String
}
struct Greeting: Encodable {
let echoedMessage: String
}
typealias Event = Input
typealias Output = Greeting
func handle(
_ event: Event,
outputWriter: some LambdaResponseWriter<Output>,
context: LambdaContext
) async throws {
// Return result to the Lambda control plane
        try await outputWriter.write(Greeting(echoedMessage: event.message))
// Perform some background work, e.g:
try await Task.sleep(for: .seconds(10))
// Exit the function. All asynchronous work has been executed before exiting the scope of this function.
// Follows structured concurrency principles.
return
}
}
Since the `StreamingLambdaHandler` protocol is the base protocol the `LambdaRuntime` works with, there are adapters to make both `LambdaHandler` and `LambdaWithBackgroundProcessingHandler` compatible with `StreamingLambdaHandler`:
- `LambdaHandlerAdapter` accepts a `LambdaHandler` and conforms it to `LambdaWithBackgroundProcessingHandler`. This is achieved by taking the generic `Output` object returned from the `handle` function of `LambdaHandler` and passing it to the `write(_:)` function of the `LambdaResponseWriter`.
- `LambdaCodableAdapter` accepts a `LambdaWithBackgroundProcessingHandler` and conforms it to `StreamingLambdaHandler`. This is achieved by wrapping the `LambdaResponseWriter` with the `LambdaResponseStreamWriter` provided by `StreamingLambdaHandler`. A call to the `write(_:)` function of `LambdaResponseWriter` is translated into a call to the `writeAndFinish(_:)` function of `LambdaResponseStreamWriter`.
Both `LambdaHandlerAdapter` and `LambdaCodableAdapter` are described in greater detail in the Codable Support section.

To summarize, `LambdaHandler` can be used with the `LambdaRuntime` by first going through `LambdaHandlerAdapter` and then through `LambdaCodableAdapter`. `LambdaWithBackgroundProcessingHandler` just requires `LambdaCodableAdapter`.

For the common JSON-in and JSON-out use-case, there is an extension on `LambdaRuntime` that abstracts away this wrapping from the user.
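For reference, the manual wrapping that this extension performs could look roughly like the following sketch; `MyHandler` is a hypothetical `LambdaHandler` conformer, and the adapter initializers and the `JSONDecoder`/`JSONEncoder` conformances are the ones introduced in the Codable Support section below:

// A hypothetical LambdaHandler conformer used only for this sketch.
struct MyHandler: LambdaHandler {
    func handle(_ event: String, context: LambdaContext) async throws -> String {
        event
    }
}

let adapter = LambdaCodableAdapter(
    handler: LambdaHandlerAdapter(handler: MyHandler()),
    encoder: JSONEncoder(),
    decoder: JSONDecoder()
)
// The adapter conforms to StreamingLambdaHandler and can therefore drive the runtime.
let runtime = LambdaRuntime(handler: adapter)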
`LambdaRuntime` is the class that communicates with the Lambda control plane, as defined in Building a custom runtime for AWS Lambda, and forwards the invocations to the provided `StreamingLambdaHandler`. It will conform to `ServiceLifecycle.Service` to provide support for swift-service-lifecycle.
/// The LambdaRuntime object. This object communicates with the Lambda control plane
/// to fetch work and report errors.
public final class LambdaRuntime<Handler>: ServiceLifecycle.Service, Sendable
where Handler: StreamingLambdaHandler
{
/// Create a LambdaRuntime by passing a handler, an eventLoop and a logger.
/// - Parameter handler: A ``StreamingLambdaHandler`` that will be invoked
/// - Parameter eventLoop: An ``EventLoop`` on which the LambdaRuntime will be
/// executed. Defaults to an EventLoop from
/// ``NIOSingletons.posixEventLoopGroup``.
/// - Parameter logger: A logger
public init(
handler: sending Handler,
eventLoop: EventLoop = Lambda.defaultEventLoop,
logger: Logger = Logger(label: "Lambda")
)
/// Create a LambdaRuntime by passing a ``StreamingLambdaHandler``.
public convenience init(handler: sending Handler)
/// Starts the LambdaRuntime by connecting to the Lambda control plane to ask
/// for events to process. If the environment variable AWS_LAMBDA_RUNTIME_API is
/// set, the LambdaRuntime will connect to the Lambda control plane. Otherwise
/// it will start a mock server that can be used for testing at port 8080
/// locally.
/// Cancel the task that runs this function to close the communication with
/// the Lambda control plane or close the local mock server. This function
/// only returns once cancelled.
public func run() async throws
}
The current API allows a Lambda function to be tested locally through a mock server by requiring an environment variable named `LOCAL_LAMBDA_SERVER_ENABLED` to be set to `true`. If this environment variable is not set, the program immediately crashes because the user will not have the `AWS_LAMBDA_RUNTIME_API` environment variable on their local machine (it is set automatically when deployed to AWS Lambda). However, making the user set the `LOCAL_LAMBDA_SERVER_ENABLED` environment variable is an unnecessary step that can be avoided. In the v2 API, the `run()` function will automatically start the mock server when the `AWS_LAMBDA_RUNTIME_API` environment variable cannot be found.
We also add a `Lambda` enum to serve as a namespace for a static function and a static property. We put these on `Lambda` because `LambdaRuntime` is generic and thus has bad ergonomics for static properties and functions.
enum Lambda {
    /// This returns the default EventLoop that a LambdaRuntime is scheduled on.
    /// It uses `NIOSingletons.posixEventLoopGroup.next()` under the hood.
    public static var defaultEventLoop: any EventLoop { get }

    /// Report a startup error to the Lambda Control Plane API
    public static func reportStartupError(_ error: any Error) async throws
}
Since the library now gives users ownership of the `main()` function and allows them to initialize services before the `LambdaRuntime` is initialized, the library cannot implicitly report errors that occur during initialization to the dedicated endpoint AWS exposes. In v1 this is done through the `initialize()` function of `LambdaRunner`, which wraps the handler's `init(...)` and reports any thrown error to the dedicated AWS endpoint.

To retain support for initialization error reporting, the `Lambda.reportStartupError(_:)` function gives users the option to manually report initialization errors. Although this should ideally happen implicitly, as it currently does in v1, we believe this is a small compromise in comparison to the benefits gained in now being able to cleanly manage the lifecycles of required services in a structured concurrency manner.
Use-case: Assume we want to load a secret for the Lambda function from a secret vault first. If this fails, we want to report the error to the control plane:

let secretVault = SecretVault()

do {
    /// !!! Error thrown: secret "foo" does not exist !!!
    let secret = try await secretVault.getSecret("foo")

    let runtime = LambdaRuntime { (event: Input, context: LambdaContext) in
        /// Lambda business logic
    }

    let serviceGroup = ServiceGroup(
        services: [postgresClient, runtime],
        configuration: .init(gracefulShutdownSignals: [.sigterm]),
        logger: logger
    )
    try await serviceGroup.run()
} catch {
    /// Report startup error straight away to the dedicated initialization error endpoint
    try await Lambda.reportStartupError(error)
}
The `LambdaHandler` and `LambdaWithBackgroundProcessingHandler` protocols abstract away encoding/decoding logic from the conformers, as they are generic over custom `Event` and `Output` types. We introduce two adapters, `LambdaHandlerAdapter` and `LambdaCodableAdapter`, that implement the encoding/decoding logic and in turn allow the respective handlers to conform to `StreamingLambdaHandler`.

Any handler conforming to `LambdaHandler` can be adapted to `LambdaWithBackgroundProcessingHandler` through `LambdaHandlerAdapter`.
/// Wraps an underlying handler conforming to ``LambdaHandler``
/// with ``LambdaWithBackgroundProcessingHandler``.
public struct LambdaHandlerAdapter<
Event: Decodable,
Output,
Handler: LambdaHandler
>: LambdaWithBackgroundProcessingHandler where Handler.Event == Event, Handler.Output == Output {
let handler: Handler
/// Register the concrete handler.
public init(handler: Handler)
/// 1. Call the `self.handler.handle(...)` with `event` and `context`.
/// 2. Pass the generic `Output` object returned from `self.handler.handle(...)` to `outputWriter.write(_:)`
public func handle(_ event: Event, outputWriter: some LambdaResponseWriter<Output>, context: LambdaContext) async throws
}
`LambdaCodableAdapter` accepts any generic underlying handler conforming to `LambdaWithBackgroundProcessingHandler`. It also accepts any encoder and decoder object conforming to the `LambdaEventDecoder` and `LambdaOutputEncoder` protocols:
public protocol LambdaEventDecoder {
/// Decode the ByteBuffer representing the received event into the generic type Event
/// the handler will receive
func decode<Event: Decodable>(_ type: Event.Type, from buffer: ByteBuffer) throws -> Event
}
public protocol LambdaOutputEncoder {
/// Encode the generic type Output the handler has produced into a ByteBuffer
func encode<Output: Encodable>(_ value: Output, into buffer: inout ByteBuffer) throws
}
We provide conformances of Foundation's `JSONDecoder` to `LambdaEventDecoder` and `JSONEncoder` to `LambdaOutputEncoder`.
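These conformances could look roughly like the following sketch; the actual implementation may differ, for example by avoiding the intermediate `Data` copies:

extension JSONDecoder: LambdaEventDecoder {
    public func decode<Event: Decodable>(_ type: Event.Type, from buffer: ByteBuffer) throws -> Event {
        // Copy the readable bytes into Data and use the standard JSONDecoder API.
        try self.decode(type, from: Data(buffer.readableBytesView))
    }
}

extension JSONEncoder: LambdaOutputEncoder {
    public func encode<Output: Encodable>(_ value: Output, into buffer: inout ByteBuffer) throws {
        // Encode to Data and append the bytes to the reused ByteBuffer.
        let data = try self.encode(value)
        buffer.writeBytes(data)
    }
}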
`LambdaCodableAdapter` implements its `handle()` method by:
- Decoding the `ByteBuffer` event into the generic `Event` type.
- Wrapping the `LambdaResponseStreamWriter` with a concrete `LambdaResponseWriter` such that calls to `LambdaResponseWriter`'s `write(_:)` are mapped to `LambdaResponseStreamWriter`'s `writeAndFinish(_:)` (see the sketch after this list).
  - Note that the argument to `LambdaResponseWriter`'s `write(_:)` is a generic `Output` object, whereas `LambdaResponseStreamWriter`'s `writeAndFinish(_:)` requires a `ByteBuffer`.
  - Therefore, the concrete implementation of `LambdaResponseWriter` also accepts an encoder. Its `write(_:)` function first encodes the generic `Output` object and then passes it to the underlying `LambdaResponseStreamWriter`.
- Passing the generic `Event` instance, the concrete `LambdaResponseWriter`, as well as the `LambdaContext` to the underlying handler's `handle()` method.
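A minimal sketch of that wrapping writer could look like this; the `EncodingResponseWriter` name and shape are assumptions, not the library's actual implementation:

struct EncodingResponseWriter<Output: Encodable, Encoder: LambdaOutputEncoder, Base: LambdaResponseStreamWriter>: LambdaResponseWriter {
    let encoder: Encoder
    let streamWriter: Base

    func write(_ output: Output) async throws {
        // Encode the generic Output into a ByteBuffer ...
        var buffer = ByteBuffer()
        try self.encoder.encode(output, into: &buffer)
        // ... and send it in one shot, which also finishes the response stream.
        try await self.streamWriter.writeAndFinish(buffer)
    }
}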
`LambdaCodableAdapter` can implement encoding/decoding for any handler conforming to `LambdaWithBackgroundProcessingHandler` if `Event` is `Decodable` and `Output` is `Encodable` or `Void`, meaning that the encoding/decoding stubs do not need to be implemented by the user.
/// Wraps an underlying handler conforming to `LambdaWithBackgroundProcessingHandler`
/// with encoding/decoding logic
public struct LambdaCodableAdapter<
Handler: LambdaWithBackgroundProcessingHandler,
Event: Decodable,
Output,
Decoder: LambdaEventDecoder,
Encoder: LambdaOutputEncoder
>: StreamingLambdaHandler where Handler.Output == Output, Handler.Event == Event {
/// Register the concrete handler, encoder, and decoder.
public init(
handler: Handler,
encoder: Encoder,
decoder: Decoder
) where Output: Encodable
    /// For handlers with a Void output, the user doesn't specify an encoder.
public init(
handler: Handler,
decoder: Decoder
) where Output == Void, Encoder == VoidEncoder
/// 1. Decode the invocation event using `self.decoder`
    /// 2. Create a concrete `LambdaResponseWriter` that maps calls to `write(_:)` to the `responseWriter`'s `writeAndFinish(_:)`
    /// 3. Call the underlying `self.handler.handle()` method with the decoded event data, the concrete `LambdaResponseWriter`,
/// and the `LambdaContext`.
public mutating func handle(
_ request: ByteBuffer,
responseWriter: some LambdaResponseStreamWriter,
context: LambdaContext
) async throws
}
To create a Lambda function using the current API, a user first has to create an object and conform it to one of the handler protocols by implementing the initializer and the `handle(...)` function. Now that `LambdaRuntime` is public, this verbosity can very easily be reduced.

The new `ClosureHandler` is generic over any `Event` type conforming to `Decodable` and any `Output` type conforming to `Encodable`, or `Void`.
public struct ClosureHandler<Event, Output>: LambdaHandler {
/// Initialize with a closure handler over generic Input and Output types
public init(body: @escaping (Event, LambdaContext) async throws -> Output) where Output: Encodable
/// Initialize with a closure handler over a generic Input type (Void Output).
public init(body: @escaping (Event, LambdaContext) async throws -> Void) where Output == Void
/// The business logic of the Lambda function.
public func handle(_ event: Event, context: LambdaContext) async throws -> Output
}
Given that `ClosureHandler` conforms to `LambdaHandler`:

- We can extend the `LambdaRuntime` initializer such that it accepts a closure as an argument.
- Within the initializer, the closure handler is wrapped with `LambdaCodableAdapter`.
extension LambdaRuntime {
/// Initialize a LambdaRuntime with a closure handler over generic Event and Output types.
/// This initializer bolts on encoding/decoding logic by wrapping the closure handler with
/// LambdaCodableAdapter.
public init<Event: Decodable, Output: Encodable>(
body: @escaping (Event, LambdaContext) async throws -> Output
) where Handler == LambdaCodableAdapter<ClosureHandler<Event, Output>, Event, Output, JSONDecoder, JSONEncoder>
/// Same as above but for handlers with a void output
public init<Event: Decodable>(
body: @escaping (Event, LambdaContext) async throws -> Void
) where Handler == LambdaCodableAdapter<ClosureHandler<Event, Void>, Event, Void, JSONDecoder, VoidEncoder>
}
We can now significantly reduce the verbosity and leverage Swift's trailing closure syntax to cleanly create and run a Lambda function, abstracting away the decoding and encoding logic from the user:
/// The type the handler will use as input
struct Input: Decodable {
var message: String
}
/// The type the handler will output
struct Greeting: Encodable {
var echoedMessage: String
}
/// A simple Lambda function that echoes the input
let runtime = LambdaRuntime { (event: Input, context: LambdaContext) in
Greeting(echoedMessage: event.message)
}
try await runtime.run()
We also add a `StreamingClosureHandler` conforming to `StreamingLambdaHandler` for use-cases where the user wants to handle encoding/decoding themselves:
public struct StreamingClosureHandler: StreamingLambdaHandler {
public init(
body: @escaping sending (ByteBuffer, LambdaResponseStreamWriter, LambdaContext) async throws -> ()
)
public func handle(
_ request: ByteBuffer,
responseWriter: LambdaResponseStreamWriter,
context: LambdaContext
) async throws
}
extension LambdaRuntime {
public init(
body: @escaping sending (ByteBuffer, LambdaResponseStreamWriter, LambdaContext) async throws -> ()
)
}
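A usage sketch of this streaming closure initializer, simply echoing the raw event bytes back, could look like this:

let streamingRuntime = LambdaRuntime { (event: ByteBuffer, responseWriter: LambdaResponseStreamWriter, context: LambdaContext) in
    // Echo the raw bytes back and finish the response stream in one call.
    try await responseWriter.writeAndFinish(event)
}
try await streamingRuntime.run()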
We considered using `[UInt8]` instead of `ByteBuffer` in the base `LambdaHandler` API. We decided to use `ByteBuffer` for the following reasons:

- 99% of use-cases will use the JSON Codable API and will not directly get in touch with `ByteBuffer` anyway. For those users it does not matter whether the base API uses `ByteBuffer` or `[UInt8]`.
- The incoming and outgoing data must be in the `ByteBuffer` format anyway, as Lambda uses SwiftNIO under the hood and SwiftNIO uses `ByteBuffer` in its APIs. By using `ByteBuffer` we can save copies to and from `[UInt8]`. This will reduce the invocation time for all users.
- The base `LambdaHandler` API will most likely mainly be used by developers who want to integrate their web framework with Lambda (examples: Vapor, Hummingbird, ...). Those developers will most likely prefer to get the data in the `ByteBuffer` format anyway, as their lower-level networking stack also depends on SwiftNIO.
Users create a `LambdaResponse` that supports streaming, instead of being passed a `LambdaResponseStreamWriter`

Instead of passing the `LambdaResponseStreamWriter` into the invocation, we considered a new type `LambdaResponse` that users must return from the `StreamingLambdaHandler`.

Its API would look like this:
/// A response returned from a ``LambdaHandler``.
/// The response can be empty, a single ByteBuffer or a response stream.
public struct LambdaResponse {
/// A writer to be used when creating a streamed response.
public struct Writer {
/// Writes data to the response stream
public func write(_ byteBuffer: ByteBuffer) async throws
/// Closes off the response stream
public func finish() async throws
/// Writes the `byteBuffer` to the response stream and subsequently closes the stream
public func writeAndFinish(_ byteBuffer: ByteBuffer) async throws
}
/// Creates an empty lambda response
public init()
/// Creates a LambdaResponse with a fixed ByteBuffer.
public init(_ byteBuffer: ByteBuffer)
/// Creates a streamed lambda response. Use the ``Writer`` to send
/// response chunks on the stream.
public init(_ stream: @escaping sending (Writer) async throws -> ())
}
The `StreamingLambdaHandler` would then look like this:
/// The base LambdaHandler protocol
public protocol StreamingLambdaHandler {
/// The business logic of the Lambda function
/// - Parameters:
/// - event: The invocation's input data
/// - context: The LambdaContext containing the invocation's metadata
/// - Returns: A LambdaResponse, that can be streamed
mutating func handle(
_ event: ByteBuffer,
context: LambdaContext
) async throws -> LambdaResponse
}
There are pros and cons for the API that returns the `LambdaResponse`, and there are pros and cons for the API that receives a `LambdaResponseStreamWriter` as a parameter.

Concerning structured concurrency principles, the approach that receives a `LambdaResponseStreamWriter` as a parameter has benefits, as the lifetime of the handle function is tied to the invocation runtime. The approach that returns a `LambdaResponse` splits the invocation into two separate function calls: first the handle method is invoked, and second the `LambdaResponse` writer closure is invoked. This means that it is impossible to use Swift APIs with `with`-style lifecycle management patterns from before creating the response until sending the full response stream off. For example, users instrumenting their Lambdas with Swift tracing likely cannot use the `withSpan` API for the full lifetime of the request if they return a streamed response.

However, when it comes to consistency with the larger Swift on Server ecosystem, the API that returns a `LambdaResponse` is likely the better choice. Hummingbird v2, OpenAPI, and the new Swift gRPC v2 implementation all use this approach. This might be due to the fact that writing middleware becomes easier if a response is explicitly returned.

We decided to implement the approach in which a `LambdaResponseStreamWriter` is passed to the function, since the approach in which a `LambdaResponse` is returned can trivially be built on top of it. The reverse is not true. We welcome discussion on this topic and are open to changing our minds and the API here.
Initially we proposed an explicit `addBackgroundTask(_:)` function on `LambdaContext` that users could call from their handler object to schedule a background task to be run after the result is reported to AWS. We received feedback that this approach to supporting background tasks does not exhibit structured concurrency, as code could still be executing after leaving the scope of the `handle(...)` function.

For handlers conforming to `StreamingLambdaHandler`, `addBackgroundTask(_:)` was unnecessary anyway, as background work could be executed in a structured concurrency manner within the `handle(...)` function after the call to `LambdaResponseStreamWriter.finish()`.

For handlers conforming to the `LambdaHandler` protocol, we considered extending `LambdaHandler` with a `performPostHandleWork(...)` function that would be called by the library after the `handle` function. Users wishing to add background work could override this function in their `LambdaHandler`-conforming object.
public protocol LambdaHandler {
associatedtype Event
associatedtype Output
func handle(_ event: Event, context: LambdaContext) async throws -> Output
func performPostHandleWork(...) async throws -> Void
}
extension LambdaHandler {
    // Users can override this function if they wish to perform background work
// after returning a response from ``handle``.
func performPostHandleWork(...) async throws -> Void {
// nothing to do
}
}
Yet this poses difficulties when the user wishes to use any state created in the `handle(...)` function as part of the background work.
In general, the most common use-case for this library will be to implement simple Lambda functions that have no requirements for response streaming, nor for performing any background work after returning the output. To keep things easy for the common use-case, and with Swift's principle of progressive disclosure of complexity in mind, we settled on three handler protocols:

- `LambdaHandler`: Most common use-case. JSON-in, JSON-out. Does not support background work execution. An intuitive `handle(event: Event, context: LambdaContext) -> Output` API that is simple to understand, i.e. users are not exposed to the concept of sending their response through a writer. `LambdaHandler` can be very cleanly implemented and used with `LambdaRuntime`, especially with `ClosureHandler`.
- `LambdaWithBackgroundProcessingHandler`: If users wish to augment their `LambdaHandler` with the ability to run background tasks, they can easily migrate (see the sketch after this list). A user simply has to:
  - Change the conformance to `LambdaWithBackgroundProcessingHandler`.
  - Add an additional `outputWriter: some LambdaResponseWriter<Output>` argument to the `handle` function.
  - Replace the `return ...` with `outputWriter.write(...)`.
  - Implement any background work after `outputWriter.write(...)`.
- `StreamingLambdaHandler`: This is the base handler protocol, intended to be used directly only for advanced use-cases. Users are provided the invocation event as a `ByteBuffer` and a `LambdaResponseStreamWriter` to which the computed result (as `ByteBuffer`) can either be streamed (with repeated calls to `write(_:)`) or sent all at once (with a single call to `writeAndFinish(_:)`). After closing the `LambdaResponseStreamWriter`, any background work can be implemented.
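As a sketch of that migration, the same handler before and after adopting background processing might look like this; the `Greeter` types are hypothetical, and `Input`/`Greeting` are the Codable types from the earlier examples:

/// Before: a plain LambdaHandler that simply returns its output.
struct Greeter: LambdaHandler {
    func handle(_ event: Input, context: LambdaContext) async throws -> Greeting {
        Greeting(echoedMessage: event.message)
    }
}

/// After: the same logic as a LambdaWithBackgroundProcessingHandler.
struct GreeterWithBackgroundWork: LambdaWithBackgroundProcessingHandler {
    func handle(
        _ event: Input,
        outputWriter: some LambdaResponseWriter<Greeting>,
        context: LambdaContext
    ) async throws {
        // The `return` is replaced with a write to the outputWriter.
        try await outputWriter.write(Greeting(echoedMessage: event.message))
        // Background work can now run after the response has been reported.
        try await Task.sleep(for: .seconds(5))
    }
}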
We initially proposed making the `LambdaResponseStreamWriter` and `LambdaResponseWriter` protocols `~Copyable`, with the functions that close the response having the `consuming` ownership keyword. This was so that the compiler could enforce the restriction of not being able to interact with the writer after the response stream has been closed.

However, non-copyable types do not compose nicely and add complexity for users. Further, for the compiler to actually enforce the `consuming` restrictions, users have to explicitly mark the writer argument as `consuming` in the `handle` function. Therefore, throwing appropriate errors to prevent abnormal interaction with the writers seems to be the simplest approach.
We are aware that AWS Lambda Runtime has not reached a proper 1.0 release. We intend to keep the current implementation around as 1.0-alpha. We don't want to change the current API without releasing a new major version, since we think there are lots of adopters out there that depend on the v1 API. Because of this, we intend to release the API proposed here as AWS Lambda Runtime v2.