How to use Nginx to build an HTTP file server for downloading files on Windows

11-03-2023

WHAT IS NGINX? Nginx is a lightweight HTTP server built on an event-driven, asynchronous, non-blocking processing model, which gives it excellent I/O performance; it is often used for reverse proxying and load balancing.

It was created by the Russian developer Igor Sysoev for Rambler.ru, the second most visited site in Russia, and was first released in 2004.

Web server: handles and responds to user requests; also commonly called an HTTP server. Examples: Apache, IIS, Nginx.

Application server: stores and runs application programs and handles the business logic in them. Examples: Tomcat, WebLogic, JBoss. (Most application servers today also include web server functionality.)

What is Nginx? To sum it up, this is it:

A lightweight web server

Designed around event-driven, asynchronous, non-blocking processing (like Node.js)

Low memory footprint, fast startup, and strong concurrency

Developed in C

Good extensibility, with many third-party modules

Widely used in Internet projects

WHY USE NGINX? Nginx is one of the top three web servers in the world, and its user base has grown very quickly in recent years.

According to some statistics, about one third of the websites in the world use Nginx. Nginx is a common building block of many large websites, including Baidu, Alibaba, Tencent, JD.com, NetEase, Sina, and DJI.

Nginx is simple to install and configure, yet its functionality is hard to replace.

HOW TO USE NGINX? Download it from the official website:

http://nginx.org/

Startup

Basic Nginx commands on Windows:

First, open CMD and change to the Nginx installation directory.

Start Nginx:

start nginx

Stop Nginx (stop exits immediately; quit shuts down gracefully):

nginx -s stop / nginx -s quit

Reload the configuration (hot restart):

nginx -s reload

Forcibly stop Nginx (pkill is a Unix command; on Windows use taskkill):

taskkill /f /im nginx.exe

Modifying the configuration

Open conf/nginx.conf, find the first server {} block, and add:

server {
    listen      8099;                 # port to listen on
    server_name localhost;            # server name (or your host IP)
    root        F:/Nginx_text;        # directory of files to serve on the Nginx server

    # Directory browsing (autoindex)
    location ~ (.*)/$ {
        allow all;
        autoindex on;                 # enable directory listing
        autoindex_localtime on;       # show file times in server local time
        autoindex_exact_size off;     # show human-readable sizes (KB, MB, GB)
        charset utf-8,gbk;            # display Chinese file names correctly
        # To beautify the listing, install a theme plug-in first, then add:
        # add_after_body /.autoindex/footer.html;
    }

    # By default, .mp4 and .pdf files open in the browser;
    # a rule like  location ~ \.(mp4|doc|pdf)$ { ... }  could target only those types.
    # The block below makes clicking any file download it directly:
    location ~ ^/(.*)$ {
        add_header Content-Disposition "attachment; filename=$1";
    }
}

A server {} block is actually nested inside http {}; each server {} defines a virtual host (site).

The code block above means that when a request to localhost:8099 reaches the Nginx server, it is matched to this server {} block and handled there.

Of course, Nginx has many more configuration options, which you can set according to the documentation as needed.

After the configuration is complete, you can access the Nginx file server at http://<host IP>:<port>.
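The download rule above relies on a regex capture: the location pattern ^/(.*)$ captures the request path into $1, which becomes the filename in the Content-Disposition header. A minimal Python sketch of that same matching logic (illustrative only, not part of Nginx):

```python
import re

# Mirror of the nginx rule:
#   location ~ ^/(.*)$ { add_header Content-Disposition "attachment; filename=$1"; }
# The capture group (.*) is what nginx exposes as $1.
LOCATION_RE = re.compile(r"^/(.*)$")

def content_disposition(uri: str) -> str:
    """Return the Content-Disposition header value nginx would attach for this URI."""
    m = LOCATION_RE.match(uri)
    if not m:
        return ""
    return f"attachment; filename={m.group(1)}"

print(content_disposition("/report.pdf"))  # attachment; filename=report.pdf
```

A header of the form "attachment; filename=..." is what tells the browser to download the file instead of rendering it inline.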

Some classic configurations

Quickly deploying a static application

Reference code:

server {
    listen      8080;
    server_name localhost;
    location / {
        # root html;                          # Nginx default path
        root  /usr/local/var/www/my-project;  # root directory of your project
        index index.html index.htm;
    }
}

Request filtering

Set access whitelist

When your project has no staging (gray-release) environment and you want your QA colleagues to try a feature first after it goes live, you need to set up an access whitelist.

If Nginx already acts as the proxy for your project, this is a piece of cake.

Reference code:

server {
    listen      8080;
    server_name localhost;
    location / {
        # IP access restriction: only 10.81.1.11 may access
        allow 10.81.1.11;
        deny  all;
        root  html;
        index index.html index.htm;
    }
}

Configuring image hotlink protection

Reference code:

server {
    listen      8080;
    server_name localhost;
    location / {
        root  /usr/local/var/www/my-project;  # root directory of your project
        index index.html index.htm;
    }
    # image hotlink protection
    location ~* \.(gif|jpg|jpeg|png|bmp|swf)$ {
        valid_referers none blocked 192.168.0.103;  # only these referrers are allowed
        if ($invalid_referer) {
            return 403;
        }
    }
}

The code block above allows image resources to be loaded only by requests with no Referer, a blocked (scheme-less) Referer, or a Referer from 192.168.0.103; requests hotlinking from other domains are rejected with a 403.
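To make the valid_referers semantics concrete, here is a rough Python model of the decision (a sketch of the rule's logic, not Nginx's actual implementation):

```python
from typing import Optional
from urllib.parse import urlparse

# Rough model of: valid_referers none blocked 192.168.0.103;
#   "none"    -> requests with no Referer header are allowed
#   "blocked" -> Referers stripped of their scheme (e.g. by a firewall) are allowed
#   otherwise -> the Referer's host must be in the whitelist
ALLOWED_HOSTS = {"192.168.0.103"}

def referer_allowed(referer: Optional[str]) -> bool:
    if referer is None:                                   # "none"
        return True
    if not referer.startswith(("http://", "https://")):   # "blocked"
        return True
    return urlparse(referer).hostname in ALLOWED_HOSTS

print(referer_allowed("http://evil.example.com/page"))  # False -> nginx returns 403
```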

Solve cross-domain problems

Reference code:

server {
    listen      8080;
    server_name localhost;
    location / {
        # cross-domain reverse proxy
        proxy_pass http://www.proxy.com;  # the domain you need to reach across origins
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
        add_header Access-Control-Allow-Headers 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization';
    }
}

The idea is to add the cross-origin (CORS) response headers while reverse-proxying the request.
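The browser, not the server, enforces these headers. A toy Python model of the browser-side check for the headers the config adds (illustrative only; the values mirror the config above):

```python
# Toy model of the browser-side CORS check against the headers nginx adds:
#   Access-Control-Allow-Origin: *
#   Access-Control-Allow-Methods: GET, POST, OPTIONS
ALLOW_ORIGIN = "*"
ALLOW_METHODS = {"GET", "POST", "OPTIONS"}

def browser_allows(origin: str, method: str) -> bool:
    """Would the browser let a page at `origin` read a `method` response?"""
    origin_ok = ALLOW_ORIGIN == "*" or ALLOW_ORIGIN == origin
    return origin_ok and method in ALLOW_METHODS

print(browser_allows("http://app.example.com", "GET"))     # True
print(browser_allows("http://app.example.com", "DELETE"))  # False: method not allowed
```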

WHAT ARE THE APPLICATIONS OF NGINX? Static/dynamic separation

Static/dynamic separation means that the Nginx server splits the requests it receives into dynamic requests and static requests.

Static requests fetch their resources directly from the root directory configured in Nginx, while dynamic requests are forwarded to the real back end (the application server mentioned above, such as Tomcat) for processing.

This not only reduces the load on the application server, leaving it to serve the back-end API, but also lets front-end and back-end code be developed and deployed separately and in parallel.

server {
    listen      8080;
    server_name localhost;
    location / {
        root  html;   # Nginx default
        index index.html index.htm;
    }
    # Static requests are handled by Nginx itself, served from my-project
    location ~ .*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|js|css)$ {
        root /usr/local/var/www/my-project;  # root directory for static requests
    }
    # Dynamic requests whose path matches /node/ are forwarded to port 8002
    location /node/ {
        proxy_pass http://localhost:8002;  # proxy to the application service
    }
}

Accessing a static resource, the Nginx server returns a file from my-project, for example index.html:

Accessing a dynamic request, the Nginx server returns, unchanged, the content it fetched from port 8002:
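The routing decision the config above makes can be sketched in Python (a hypothetical model; paths and ports are the ones from the example config):

```python
import re

# Static extensions are served from nginx's own root; /node/ paths go to the backend.
STATIC_RE = re.compile(r".*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|js|css)$", re.IGNORECASE)

def route(path: str) -> str:
    """Return where the example config would send a request for `path`."""
    if STATIC_RE.match(path):
        return "static: /usr/local/var/www/my-project" + path
    if path.startswith("/node/"):
        return "proxy: http://localhost:8002" + path
    return "static: html" + path   # falls through to location /

print(route("/index.html"))  # served by nginx from my-project
print(route("/node/users"))  # forwarded to port 8002
```

(Note that in real Nginx, regex locations take precedence over prefix locations, which is why a path like /node/page.html would still be treated as static.)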

Reverse proxy

What is a reverse proxy? It is a bit like asking a purchasing agent to buy something for you (the browser or other client asks Nginx): you don't care where the agent buys it, as long as you get what you wanted (the client gets the content in the end, without knowing where it actually came from).

What a reverse proxy provides:

It protects the application server (the extra proxy layer can shield dangerous attacks and makes permission control easier).

It enables load balancing (covered in a moment).

It enables cross-origin access (arguably the simplest way to do it).

It is easy to configure a simple reverse proxy, and the code is as follows.

server {
    listen      8080;
    server_name localhost;
    location / {
        proxy_pass http://localhost:8000;  # reverse proxy: requests are forwarded to port 8000
    }
}

The effect is simple: with the code block above, requesting localhost:8080 through Nginx has the same result as requesting http://localhost:8000 directly. (Same principle as the purchasing agent.)

This is the simplest model of a reverse proxy, just to illustrate the configuration of the reverse proxy. But in reality, reverse proxy is mostly used in load balancing.

In the diagram, Nginx acts as the proxy: the three clients on the left fetch content from Nginx without perceiving the existence of the three servers behind it.

Here the proxy acts as a reverse proxy for the three servers.

CDN services are a typical application of reverse proxying, which shows how widely it is used. Reverse proxying is also the basis of the load balancing found in the architecture of many large companies, though there are other ways to implement load balancing.

Load balancing

What is load balancing? As business grows and the number of users keeps increasing, a single service can no longer meet the system's requirements. That is when server clusters appear.

In a server cluster, Nginx distributes the client requests it receives evenly (strictly speaking, not necessarily evenly, but according to configured weights) across all servers in the cluster. This is called load balancing.

The schematic diagram of load balancing is as follows:

The role of load balancing

Share server cluster pressure

Ensure the stability of client access.

As mentioned earlier, load balancing can solve the problem of sharing the pressure of server clusters. In addition, Nginx also has a health check (server heartbeat check) function, which will periodically poll and send health check requests to all servers in the cluster to check whether any servers in the cluster are in an abnormal state.

Once a server is found to be abnormal, the client requests that come in after this will not be sent to the server (until the health check finds that the server has returned to normal), thus ensuring the stability of client access.
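The take-out-and-restore behavior described above can be sketched in a few lines of Python (illustrative only; this is the idea, not Nginx's implementation):

```python
# Servers marked unhealthy stop receiving requests until a later check succeeds.
servers = {"localhost:8000": True, "localhost:8001": True}

def record_health(server: str, healthy: bool) -> None:
    """Record the result of a health-check request for one server."""
    servers[server] = healthy

def available() -> list:
    """Servers currently eligible to receive client requests."""
    return [s for s, ok in servers.items() if ok]

record_health("localhost:8001", False)  # a health check fails
print(available())                      # ['localhost:8000']
record_health("localhost:8001", True)   # the server recovers
print(available())                      # both servers back in rotation
```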

Configuring load balancing

Configuring simple load balancing is not complicated; the code is as follows:

# Load balancing: define an upstream group named "domain"
upstream domain {
    server localhost:8000;
    server localhost:8001;
}

server {
    listen      8080;
    server_name localhost;
    location / {
        # root  html;                # Nginx default
        # index index.html index.htm;
        proxy_pass http://domain;    # requests are distributed between ports 8000 and 8001
        proxy_set_header Host $host:$server_port;
    }
}

8000 and 8001 are two services I started locally with Node.js. Once load balancing is working, visiting localhost:8080 sometimes shows the page served by port 8000 and sometimes the page served by port 8001.

If you can see this effect, it means that the load balancing strategy you configured has taken effect.

The load balancing in the actual project is far more complicated than this case, but all changes are derived from this ideal model.
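The distribution idea behind this ideal model can be sketched in Python. This is a naive weight expansion for illustration (the weight values are hypothetical, and Nginx actually uses a smoother weighted round-robin algorithm):

```python
from itertools import cycle

# Each upstream server paired with a hypothetical weight,
# like  server localhost:8000 weight=2;  in an nginx upstream block.
upstream = [("localhost:8000", 2), ("localhost:8001", 1)]

# Naive approach: repeat each server `weight` times, then rotate forever.
rotation = cycle([s for s, w in upstream for _ in range(w)])

picks = [next(rotation) for _ in range(6)]
print(picks)
# ['localhost:8000', 'localhost:8000', 'localhost:8001',
#  'localhost:8000', 'localhost:8000', 'localhost:8001']
```

Over any window of three requests, port 8000 receives two and port 8001 receives one, matching the 2:1 weights.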

Because each server in a cluster is limited by resources such as memory, the number of servers in a load-balanced cluster cannot grow indefinitely. But thanks to its good fault-tolerance mechanism, load balancing has become an essential part of high-availability architectures.

Forward proxy

A forward proxy works in the opposite direction. Take the purchasing example again: if many people want the same product, the agent goes to the product's shop and buys it for all of them at once. Throughout, the shopkeeper does not know the agent is buying on behalf of others. Here the agent acts as a forward proxy for the many customers who want the goods.

The schematic diagram of the forward proxy is as follows:

In the diagram, Nginx acts as the proxy: the three clients on the left fetch content through Nginx, and the server does not perceive the existence of the three clients.

Here the proxy acts as a forward proxy for the three clients.

A forward proxy is a server that sits between clients and the origin server. To obtain content, a client sends a request to the proxy and specifies the target (the origin server); the proxy then forwards the request to the origin server and returns the content it obtains to the client. When you need your own server to act as such a proxy server, Nginx can implement the forward proxy.

A "wall-crossing" VPN is actually a forward proxy tool.

The VPN proxies requests for web pages on servers outside the firewall to a proxy server that can reach those sites, and that proxy server forwards the fetched page content back to the client. Such a proxy server can be built with Nginx.

Copyright Description:No reproduction without permission。
