Serverless means someone else booted the JVM
My tweet https://twitter.com/sytses/status/757535719619526656 resonated with people: each infrastructure paradigm improves application startup time by 100x (VMs 200s, containers 2s, serverless 20ms).
References:
- https://twitter.com/adrianco/status/736553530689998848
- http://martinfowler.com/articles/serverless.html
- https://linuxacademy.com/blog/amazon-web-services-2/serverless-architecture/
- http://www.nextplatform.com/2016/07/27/first-kill-servers/
Compared to containers, serverless means that you are sharing a JVM with other applications. You don't have to wait seconds for it to boot or scale up, and you don't pay when you aren't using it.
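To make that concrete, here is a minimal sketch of what such an event-triggered function looks like. It assumes the AWS Lambda Node.js conventions (an exported async handler receiving an API Gateway-style event), written here as TypeScript; the event shape and response contents are illustrative, not taken from this post:

```typescript
// Sketch of a Lambda-style handler: the platform boots and scales the
// runtime for you; you only supply the function that runs per event,
// and you are billed per invocation rather than per idle server.

interface ApiEvent {
  queryStringParameters?: { name?: string };
}

interface ApiResponse {
  statusCode: number;
  body: string;
}

// 'handler' is the conventional export the platform invokes for each event.
export const handler = async (event: ApiEvent): Promise<ApiResponse> => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
};
```

Note there is no server or container lifecycle anywhere in this code; booting, scaling, and tearing down the runtime is the platform's problem, which is exactly the trade-off described above.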
This makes sense to me. For now, I think we still have to get all the benefits out of containers. Containers used to be herded manually; with container schedulers this becomes easier in production. I still wonder what it will look like in development:
- Run on a development machine with localkube
- Run in the cloud with Koding
The first conforms to the traditional way. The second needs an internet connection everywhere (only recently feasible) and a FUSE filesystem to use a local editor (also recently possible). In exchange it lets you use less compute locally (battery, heat, noise) and more compute in the cloud (a copy of the production environment, more speed).
According to http://www.nextplatform.com/2016/07/27/first-kill-servers/, each customer gets its own VM for isolation reasons: "As it turns out, the Lambda service runs on top of the EC2 compute service, which has compute carved up by a homegrown variant of the Xen hypervisor, not on bare metal machines, and then has a homegrown variant of Linux containers (LXC) abstracted on top of that to host the Lambda code snippets that get activated by events. Each customer using the AWS service has its Lambda functions running in their own unique virtual machines on top of Xen to ensure isolation across customers, just like EC2 compute slices affords."

So it seems that, at this moment, 'pay only per function' is purely a pricing model and not a reflection of the underlying costs. Serverless doesn't look more computationally efficient than container schedulers; what efficiency it has comes from restricting you to 'only use Node functions of this Node version'. If you accept those restrictions, running and scaling this looks within reach of a good container scheduler.