Golang is great for Lambda - you compile the Lambda runtime into the binary and end up with a single ~20MB executable that does everything needed. Other runtimes carry a couple hundred megabytes of overhead for the Lambda runtime and AWS SDK (though you don't see that overhead unless you're doing ECR-based Lambdas). The part I've always struggled with is that CloudFormation only lets you inline 4KB of code. The underlying API allows payloads of up to 50MB before you have to pull from S3 or ECR - I'd love to see a little more flexibility there, otherwise you have a lot of overhead to set up CI/CD pipelines and signal Lambda to update from another source.
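For context, a minimal Go handler built on the aws-lambda-go library looks roughly like this (the event shape and names here are just illustrative):

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// Request is an illustrative event shape; Lambda unmarshals the JSON payload into it.
type Request struct {
	Name string `json:"name"`
}

func handle(ctx context.Context, req Request) (string, error) {
	return fmt.Sprintf("Hello, %s", req.Name), nil
}

func main() {
	// lambda.Start connects the handler to the Lambda runtime API,
	// so the compiled binary is the entire deployment artifact.
	lambda.Start(handle)
}
```

Cross-compile with GOOS=linux, zip the binary (named `bootstrap` if you target the provided.al2 runtime), and that one file is the whole function.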
What I've started doing (and how I cover it in the book) is using the CDK to build and deploy the Lambda code. The `GoFunction`[0] construct handles building and deploying the code, so you don't have to set anything up manually.
Underneath it still happens by way of S3 & CloudFormation, but the CDK abstracts away a lot of the details, which makes it quite convenient to use.
[0]
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-g...
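As a rough sketch of what that looks like (the module path and prop names below are from the alpha Go Lambda CDK package as I recall them, so treat this as an approximation and check the linked docs):

```go
package main

import (
	"github.com/aws/aws-cdk-go/awscdk/v2"
	lambdago "github.com/aws/aws-cdk-go/awscdklambdagoalpha/v2"
	"github.com/aws/jsii-runtime-go"
)

func main() {
	app := awscdk.NewApp(nil)
	stack := awscdk.NewStack(app, jsii.String("LambdaGoStack"), nil)

	// GoFunction compiles the Go source under ./cmd/api and uploads the
	// resulting binary as the function's deployment package.
	lambdago.NewGoFunction(stack, jsii.String("ApiHandler"), &lambdago.GoFunctionProps{
		Entry: jsii.String("./cmd/api"), // illustrative path to a main package
	})

	app.Synth(nil)
}
```

A single `cdk deploy` then covers the build, the S3 upload and the CloudFormation update.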
I forgot URLs don't become links in the submission text, so here's a clickable one :)
https://kevinmcconnell.gumroad.com/l/lambda-go-book/hn
Any experience with cold start times? I wrote a relatively simple lambda handler in Go, expecting it to be fast, but node seems to beat it. The Go version's memory usage is better, though.
I found they'd usually be on the order of a few hundred milliseconds. They tend to be quicker if you have more memory allocated to your function (because the additional CPU power that comes with it helps).
I've mostly found that my cold starts were slow enough to look bad in the metrics, but fast enough (and rare enough) that the impact on user experience wasn't actually that noticeable. Given the other benefits I was getting from Lambda (like the easy scaling and low maintenance), it was worth the occasional small blip in latency.
And for functions that aren't directly user facing -- like processing items from a queue -- I've not found it to be an issue at all.
Of course every use case is different though, and some apps can tolerate this more than others.
Somewhat related, Lambda has the concept of provisioned concurrency. You can keep a certain number of Lambda execution environments warm at all times, which helps a lot with cold starts. Obviously this is more expensive, so you will have to weigh the benefits for your use case.
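For anyone wiring this up with the CDK, one way to express it is an alias with provisioned concurrency. This is only a sketch: the package path and prop names are as I remember them, and the helper name, alias name and count of 5 are made up for illustration:

```go
package infra

import (
	"github.com/aws/aws-cdk-go/awscdk/v2/awslambda"
	"github.com/aws/constructs-go/constructs/v10"
	"github.com/aws/jsii-runtime-go"
)

// addProvisionedConcurrency keeps a fixed number of execution environments
// initialized for an alias, so invocations through that alias avoid cold starts.
// (Hypothetical helper; scope and fn would come from your CDK stack.)
func addProvisionedConcurrency(scope constructs.Construct, fn awslambda.Function, n float64) awslambda.Alias {
	return awslambda.NewAlias(scope, jsii.String("LiveAlias"), &awslambda.AliasProps{
		AliasName:                       jsii.String("live"),
		Version:                         fn.CurrentVersion(),
		ProvisionedConcurrentExecutions: jsii.Number(n),
	})
}
```

You pay for those warm environments whether they're invoked or not, which is the trade-off mentioned above.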
I have used Lambda extensively over the past three months in Python, Ruby and Node. The biggest game changer for me was discovering Lambda Layers and realizing how I could avoid rebuilding zip archives for every deploy. I’ve loved the iteration speed and performance, and recently I’ve depended heavily on the Lambda + EFS combo.
I purchased the book and was surprised there wasn’t more on EFS usage, but otherwise the book looks wonderful.
What you described with Lambda Layers sounds like a thin layer over Lambda container images[0]. What are the benefits of using traditional Lambda + Lambda Layers vs Lambda container images?
[0]
https://docs.aws.amazon.com/lambda/latest/dg/images-create.h...
For clarification, Lambda Layers is the packaging of a node_modules, vendor/gem, or Python package folder, and then "sharing" that across various Lambdas, instead of deploying the same files again and again.
The actual lambda file ends up being a few hundred bytes, because they are literally just one function in a single file.
There is no compiling involved in a Layer after it's made once. The only part of the Lambda that changes is the lambda_handler.
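In infrastructure-as-code terms, a layer plus a thin handler looks roughly like the sketch below (CDK in Go; the asset paths, runtime and handler name are placeholders, and the layer asset has to follow the runtime-specific layout, e.g. a python/ folder for Python packages):

```go
package infra

import (
	"github.com/aws/aws-cdk-go/awscdk/v2/awslambda"
	"github.com/aws/constructs-go/constructs/v10"
	"github.com/aws/jsii-runtime-go"
)

// addFunctionWithSharedLayer is a hypothetical helper: the dependencies live in
// a layer that is built once, while the function's own asset is just the handler file.
func addFunctionWithSharedLayer(scope constructs.Construct) awslambda.Function {
	deps := awslambda.NewLayerVersion(scope, jsii.String("DepsLayer"), &awslambda.LayerVersionProps{
		// "layer/" is assumed to contain python/<installed packages>
		Code:               awslambda.Code_FromAsset(jsii.String("layer/"), nil),
		CompatibleRuntimes: &[]awslambda.Runtime{awslambda.Runtime_PYTHON_3_9()},
	})

	return awslambda.NewFunction(scope, jsii.String("Handler"), &awslambda.FunctionProps{
		Runtime: awslambda.Runtime_PYTHON_3_9(),
		Handler: jsii.String("app.lambda_handler"),
		// The function's own code asset stays tiny: just the one-file handler.
		Code:   awslambda.Code_FromAsset(jsii.String("src/"), nil),
		Layers: &[]awslambda.ILayerVersion{deps},
	})
}
```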
My understanding is that a custom lambda image is the nuclear option; having everything fit cleanly into a normal lambda archive is the preferred option; and lambda layers are a type of middle ground if you've outgrown one but don't quite need the other.
Have you found a scalable solution for managing layers with infrastructure as code? As soon as I introduced them to my team, they started finding ways to exploit the process and it became a mess. Would love to know the best practices! Also, beware of EFS: our applications did some heavy lifting with EFS and we hit a bug that broke our stack. Not sure it's completely production ready. It took Amazon well over a year to track it down and reimburse us :)
Thanks! And that's a good point about EFS, it's something I've found very useful too. I'm planning to add some updates to the book, so I'll put it on my list of things to include.
Thanks for the feedback!
Just curious, what did you typeset the book with?
I used Asciidoctor. It's mostly the default theme, but I tweaked a few things about it.
If you're interested in the details, I wrote a blog post that goes into it a bit more:
https://www.kevinwmcconnell.com/writing/how-i-wrote-my-book
That post also has a link to a repo with all the settings/theme/etc, so anyone can take it and use it as a starting point for their own book.
In the article, you reference `asciidoctor-epub`; I think you mean `asciidoctor-epub3`? Might be worth updating, or correct me if I'm wrong.
Thanks!
Vendor lock-in books aren’t my thing.
It’s fashionable to hate on k8s as a Google project, but AWS is what props Amazon up, while k8s runs on a Raspberry Pi. Online retail is too easy to replicate.
IT people are feeding the beast they complain about: complexity through numerous APIs to do the same tasks, and security through obscurity by letting Amazon decide what’s secure in their API sandbox.
Never mind all the “walled garden” complaints about app stores. The cloud is the root-node walled garden of computing. Where do you think all those apps get distributed from?
Way to build your own prison.