In the first part of this series I want to introduce readers to the world of OpenResty (the what and the why) without actually digging into the implementation (the how) of any feature. In the next part of the series we will take a simple use case and build a small feature with OpenResty.
OpenResty is a full-fledged web application framework that bundles NGINX, LuaJIT, and many third-party Lua libraries. It turns the NGINX server into a powerful web app server, allowing web developers to use the Lua programming language to script existing NGINX C modules and Lua modules and to build extremely high-performance web applications that can handle anywhere from 10K to 1000K+ connections on a single box.
We can build web gateways and web application firewalls, and in a microservices world we can centralize the cross-cutting concerns that would otherwise have to be implemented in every microservice: authentication, monitoring, logging, rate limiting, IP whitelisting, and request transformations.
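As a quick taste of what such gateway-style logic looks like, here is a minimal sketch of an API-key check written in OpenResty's embedded Lua. The `X-Api-Key` header name, the hard-coded key, and the upstream address are all illustrative assumptions; a real deployment would validate against a key store.

```nginx
location /api/ {
    # Lua code that runs before the request is proxied upstream
    access_by_lua_block {
        -- hypothetical shared-secret check, purely for illustration
        local key = ngx.req.get_headers()["X-Api-Key"]
        if key ~= "secret-demo-key" then
            return ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end
    }
    proxy_pass http://127.0.0.1:8080;
}
```

Because this check lives in the gateway, none of the individual microservices behind `proxy_pass` need to implement it themselves.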
How does NGINX work?
Before diving into OpenResty, let’s try to understand how NGINX handles HTTP requests:
- Receive the request
- Parse the request
- Find the virtual server
- Find the location
- Run phase handlers in sequence (more on this below)
- Generate the response
- Filter response headers
- Filter response body
- Send output/response to client
NGINX processes requests in multiple phases, and in each phase many phase handlers may be invoked. Because NGINX has this well-defined cycle of phases that every request must go through before a final response is sent, OpenResty can offer the ability to inject Lua code into these request phases. For example, say you have some complex routing logic that must run before you decide which upstream server to use. You can write a Lua app with that custom logic and invoke it during the rewrite phase using the rewrite_by_lua handler provided by OpenResty.
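The routing scenario above can be sketched as follows. The upstream names (`backend_v1`/`backend_v2`), the `X-User-Group` header, and the port numbers are assumptions made up for this example:

```nginx
upstream backend_v1 { server 127.0.0.1:8081; }
upstream backend_v2 { server 127.0.0.1:8082; }

server {
    listen 80;

    location / {
        set $target backend_v1;  # default upstream

        # Lua handler injected into the rewrite phase
        rewrite_by_lua_block {
            -- hypothetical routing rule: send beta users to v2
            local group = ngx.req.get_headers()["X-User-Group"]
            if group == "beta" then
                ngx.var.target = "backend_v2"
            end
        }

        proxy_pass http://$target;
    }
}
```

The `set $target ...` directive declares the nginx variable so the Lua code can overwrite it before the upstream is chosen.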
Why OpenResty?
- Fast: OpenResty is essentially NGINX plus Lua. We all know NGINX is fast, but what about Lua? OpenResty uses LuaJIT, a just-in-time compiler for Lua, and the performance of the compiled code approaches C levels with a low memory footprint.
- Concurrency: Each NGINX worker process has its own LuaJIT VM instance for running Lua code. The worker processes listen for incoming requests and pass them to the respective phase handlers, and each phase handler runs in an independent lightweight thread. This combination of multiple LuaJIT VMs, each running many lightweight threads, makes OpenResty highly concurrent.
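These lightweight threads can also be spawned explicitly, which is handy for running sub-requests in parallel. A minimal sketch, assuming two internal locations `/api/users` and `/api/orders` exist (both names are made up for this example):

```nginx
location /aggregate {
    content_by_lua_block {
        -- each spawn creates a "light thread"; both sub-requests
        -- run concurrently instead of one after the other
        local t1 = ngx.thread.spawn(function()
            return ngx.location.capture("/api/users")
        end)
        local t2 = ngx.thread.spawn(function()
            return ngx.location.capture("/api/orders")
        end)

        -- wait for both threads and combine their responses
        local ok1, res1 = ngx.thread.wait(t1)
        local ok2, res2 = ngx.thread.wait(t2)
        if not (ok1 and ok2) then
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        ngx.say(res1.body, res2.body)
    }
}
```

Crucially, these threads never block the worker process: while one light thread waits on I/O, the same worker keeps serving other requests.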