
In this article by Shalabh Aggarwal, the author of Flask Framework Cookbook, we will talk about various application-deployment techniques, followed by some monitoring tools that are used post-deployment.


Deployment of an application, and managing it post-deployment, is as important as developing it. There are various ways of deploying an application, and choosing the best one depends on your requirements. Deploying an application correctly is very important from the points of view of both security and performance. There are also multiple ways of monitoring an application after deployment; some are paid and others are free to use. Choosing among them again depends on your requirements and the features they offer.

Each of the tools and techniques has its own set of features. For example, adding too much monitoring to an application can prove to be an extra overhead to the application and the developers as well. Similarly, missing out on monitoring can lead to undetected user errors and overall user dissatisfaction.

Hence, we should choose our tools wisely so that they ease our lives as much as possible.

Among the post-deployment monitoring tools, we will discuss Pingdom and New Relic. Sentry is another tool that can prove to be the most beneficial of all from a developer's perspective.

Deploying with Apache

First, we will learn how to deploy a Flask application with Apache, which is, unarguably, the most popular HTTP server. For Python web applications, we will use mod_wsgi, which implements a simple Apache module that can host any Python applications that support the WSGI interface.

Remember that mod_wsgi is not the same as Apache and needs to be installed separately.

Getting ready

We will start with our catalog application and make appropriate changes to it to make it deployable using the Apache HTTP server.

First, we should make our application installable so that our application and all its libraries are on the Python load path. This can be done using a setup.py script. There will be a few changes to the script as per this application. The major changes are mentioned here:

packages=[
    'my_app',
    'my_app.catalog',
],
include_package_data=True,
zip_safe=False,

First, we mentioned all the packages that need to be installed as part of our application. Each of these needs to have an __init__.py file. The zip_safe flag tells the installer not to install this application as a zipped egg. The include_package_data statement tells the installer to read the MANIFEST.in file in the same folder and include any package data mentioned there. Our MANIFEST.in file looks like this:

recursive-include my_app/templates *
recursive-include my_app/static *
recursive-include my_app/translations *

Now, just install the application using the following command:

$ python setup.py install

Installing mod_wsgi is usually OS-specific. Installing it on a Debian-based distribution should be as easy as just using the packaging tool, that is, apt or aptitude. For details, refer to https://code.google.com/p/modwsgi/wiki/InstallationInstructions and https://github.com/GrahamDumpleton/mod_wsgi.

How to do it…

We need to create some more files, the first one being app.wsgi. This loads our application as a WSGI application:

# Activate the virtualenv so that the application's installed
# dependencies are importable (not needed for system-wide installations)
activate_this = '<Path to virtualenv>/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

from my_app import app as application
import sys, logging
logging.basicConfig(stream=sys.stderr)

As we perform all our installations inside virtualenv, we need to activate the environment before our application is loaded. In the case of a system-wide installation, the first two statements are not needed. Then, we import our app object as application, which is used as the application being served. The last two lines are optional, as they just stream the output to the standard logger, which is disabled by mod_wsgi by default.

The app object needs to be imported as application, because mod_wsgi expects the application keyword.
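Note that execfile exists only in Python 2. Under Python 3, the same activation step can be written with a small helper like this (a sketch; the path to activate_this.py is still whatever your virtualenv provides):

```python
def activate_virtualenv(activate_this):
    """Python 3 replacement for execfile(activate_this, ...): read the
    virtualenv's activate_this.py and execute it so that the environment's
    site-packages directory lands on sys.path."""
    with open(activate_this) as f:
        code = f.read()
    exec(code, {'__file__': activate_this})
```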

Next comes a config file that will be used by the Apache HTTP server to serve our application correctly from specific locations. The file is named apache_wsgi.conf:

<VirtualHost *>
 
   WSGIScriptAlias / <Path to application>/flask_catalog_deployment/app.wsgi
 
   <Directory <Path to application>/flask_catalog_deployment>
       Order allow,deny
       Allow from all
   </Directory>
 
</VirtualHost>

The preceding code is the Apache configuration, which tells the HTTP server about the various directories where the application has to be loaded from.

The final step is to add the apache_wsgi.conf file to apache2/httpd.conf so that our application is loaded when the server runs:

Include <Path to application>/flask_catalog_deployment/apache_wsgi.conf

How it works…

Let’s restart the Apache server service using the following command:

$ sudo apachectl restart

Open up http://127.0.0.1/ in the browser to see the application’s home page. Any errors coming up can be seen at /var/log/apache2/error_log (this path can differ depending on OS).

There’s more…

After all this, it is possible that the product images uploaded as part of the product creation do not work. For this, we should make a small modification to our application’s configuration:

app.config['UPLOAD_FOLDER'] = '<Some static absolute path>/flask_test_uploads'

We opted for a fixed absolute path because we do not want the upload location to change every time the application is modified or reinstalled.

Now, we will include the path chosen in the preceding code to apache_wsgi.conf:

Alias /static/uploads/ "<Some static absolute path>/flask_test_uploads/"

<Directory "<Some static absolute path>/flask_test_uploads">
    Order allow,deny
    Options Indexes
    Allow from all
    IndexOptions FancyIndexing
</Directory>

After this, install the application and restart apachectl.


Deploying with uWSGI and Nginx

For those who are already aware of the usefulness of uWSGI and Nginx, there is not much that can be explained. uWSGI is a protocol as well as an application server and provides a complete stack to build hosting services. Nginx is a reverse proxy and HTTP server that is very lightweight and capable of handling virtually unlimited requests. Nginx works seamlessly with uWSGI and provides many under-the-hood optimizations for better performance.

Getting ready

We will use our application from the last recipe, Deploying with Apache, and use the same app.wsgi, setup.py, and MANIFEST.in files. Also, other changes made to the application’s configuration in the last recipe will apply to this recipe as well.

Disable any other HTTP servers that might be running, such as Apache and so on.

How to do it…

First, we need to install uWSGI and Nginx. On Debian-based distributions such as Ubuntu, they can be easily installed using the following commands:

$ sudo apt-get install nginx
$ sudo apt-get install uwsgi uwsgi-plugin-python

You can also install uWSGI inside a virtualenv using the pip install uWSGI command.

Again, these are OS-specific, so refer to the respective documentations as per the OS used.

Make sure that you have an apps-enabled folder for uWSGI, where we will keep our application-specific uWSGI configuration files, and a sites-enabled folder for Nginx, where we will keep our site-specific configuration files. Usually, these are already present in most installations in the /etc/ folder. If not, refer to the OS-specific documentations to figure out the same.

Next, we will create a file named uwsgi.ini in our application:

[uwsgi]
http-socket = :9090
plugin      = python
wsgi-file   = <Path to application>/flask_catalog_deployment/app.wsgi
processes   = 3

To test whether uWSGI is working as expected, run the following command:

$ uwsgi --ini uwsgi.ini

The preceding file and command are equivalent to running the following command:

$ uwsgi --http-socket :9090 --plugin python --wsgi-file app.wsgi

Now, point your browser to http://127.0.0.1:9090/; this should open up the home page of the application.

Create a soft link of this file to the apps-enabled folder mentioned earlier using the following command:

$ ln -s <path/to/uwsgi.ini> <path/to/apps-enabled>

Before moving ahead, edit the preceding file to replace http-socket with socket. This changes the protocol from HTTP to uWSGI (read more about it at http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html).

Now, create a new file called nginx-wsgi.conf. This contains the Nginx configuration needed to serve our application and the static content:

location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:9090;
}

location /static/uploads/ {
    alias <Some static absolute path>/flask_test_uploads/;
}

In the preceding code block, uwsgi_pass specifies the uWSGI server that needs to be mapped to the specified location.

Create a soft link of this file to the sites-enabled folder mentioned earlier using the following command:

$ ln -s <path/to/nginx-wsgi.conf> <path/to/sites-enabled>

Edit the nginx.conf file (usually found at /etc/nginx/nginx.conf) to add the following line inside the first server block before the last }:

include <path/to/sites-enabled>/*;

After all of this, reload the Nginx server using the following command:

$ sudo nginx -s reload

Point your browser to http://127.0.0.1/ to see the application that is served via Nginx and uWSGI.
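Alternatively, if you would rather not touch the default server block in nginx.conf, nginx-wsgi.conf can itself be a complete server block, such as the following sketch (the server_name here is a placeholder):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9090;
    }

    location /static/uploads/ {
        alias <Some static absolute path>/flask_test_uploads/;
    }
}
```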

The preceding instructions of this recipe can vary depending on the OS being used and different versions of the same OS can also impact the paths and commands used. Different versions of these packages can also have some variations in usage. Refer to the documentation links provided in the next section.


Deploying with Gunicorn and Supervisor

Gunicorn is a WSGI HTTP server for Unix. It is very simple to implement, ultra light, and fairly speedy. Its simplicity lies in its broad compatibility with various web frameworks.

Supervisor is a monitoring tool that controls various child processes and handles the starting/restarting of these child processes when they exit abruptly for some reason. It can even be extended to control the processes via the XML-RPC API from remote locations without logging in to the server (we won't discuss this here, as it is out of the scope of this book).

One thing to remember is that these tools can be used along with the tools covered in the previous recipes, for example, using Nginx as a proxy server in front of Gunicorn. This is left for you to try on your own.

Getting ready

We will start with the installation of both the packages, that is, gunicorn and supervisor. Both can be directly installed using pip:

$ pip install gunicorn
$ pip install supervisor

How to do it…

To check whether the gunicorn package works as expected, just run the following command from inside our application folder:

$ gunicorn -w 4 -b 127.0.0.1:8000 my_app:app

After this, point your browser to http://127.0.0.1:8000/ to see the application’s home page.

Now, we need to do the same using Supervisor so that Gunicorn runs as a daemon and is controlled by Supervisor itself rather than by human intervention. First of all, we need a Supervisor configuration file. By default, Supervisor looks for an etc folder that has a file named supervisord.conf. In system-wide installations, this folder is /etc/; in a virtualenv, it will first look for an etc folder inside the virtualenv and then fall back to /etc/. Generate the configuration file by running the following command from the virtualenv:

$ echo_supervisord_conf > etc/supervisord.conf

The echo_supervisord_conf program is provided by Supervisor; it prints a sample config file to the location specified.

This command will create a file named supervisord.conf in the etc folder. Add the following block in this file:

[program:flask_catalog]
command=<path/to/virtualenv>/bin/gunicorn -w 4 -b 127.0.0.1:8000 my_app:app
directory=<path/to/virtualenv>/flask_catalog_deployment
user=someuser ; run as a relevant non-root user
autostart=true
autorestart=true
stdout_logfile=/tmp/app.log
stderr_logfile=/tmp/error.log

Make a note that one should never run applications as the root user. This is a huge security flaw in itself, as a compromised or crashing application can then harm the OS itself.

How it works…

Now, run the following commands:

$ supervisord
$ supervisorctl status
flask_catalog   RUNNING   pid 40466, uptime 0:00:03

The first command invokes the supervisord server, and the next one gives a status of all the child processes.

The tools discussed in this recipe can be coupled with Nginx to serve as a reverse proxy server. I suggest that you try it by yourself.

Every time you make a change to your application and then wish to restart Gunicorn in order for it to reflect the changes, run the following command:

$ supervisorctl restart all

You can also restart specific processes instead of restarting everything:

$ supervisorctl restart flask_catalog


Deploying with Tornado

Tornado is a complete web framework and a standalone web server in itself. Here, we will use Flask to create our application, which is basically a combination of URL routing and templating, and leave the server part to Tornado. Tornado is built to hold thousands of simultaneous standing connections and makes applications very scalable.

Tornado has limitations while working with WSGI applications. So, choose wisely! Read more at http://www.tornadoweb.org/en/stable/wsgi.html#running-wsgi-apps-on-tornado-servers.

Getting ready

Installing Tornado can be simply done using pip:

$ pip install tornado

How to do it…

Next, create a file named tornado_server.py and put the following code in it:

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from my_app import app
 
http_server = HTTPServer(WSGIContainer(app))
http_server.listen(5000)
IOLoop.instance().start()

Here, we created a WSGI container for our application; this container is then used to create an HTTP server, and the application is hosted on port 5000.

How it works…

Run the Python file created in the previous section using the following command:

$ python tornado_server.py

Point your browser to http://127.0.0.1:5000/ to see the home page being served.

We can couple Tornado with Nginx (as a reverse proxy to serve static content) and Supervisor (as a process manager) for the best results. It is left for you to try this on your own.

Using Fabric for deployment

Fabric is a command-line tool in Python; it streamlines the use of SSH for application deployment or system-administration tasks. As it allows the execution of shell commands on remote servers, the overall process of deployment is simplified, as the whole process can now be condensed into a Python file, which can be run whenever needed. Therefore, it saves the pain of logging in to the server and manually running commands every time an update has to be made.

Getting ready

Installing Fabric can be simply done using pip:

$ pip install fabric

We will use the application from the Deploying with Gunicorn and Supervisor recipe. We will create a Fabric file to perform the same process on the remote server.

For simplicity, let's assume that the remote server has already been set up, all the required packages have been installed, and a virtualenv environment has been created.

How to do it…

First, we need to create a file called fabfile.py in our application, preferably at the application’s root directory, that is, along with the setup.py and run.py files. Fabric, by default, expects this filename. If we use a different filename, then it will have to be explicitly specified while executing.

A basic Fabric file will look like:

from fabric.api import sudo, cd, prefix, run

def deploy_app():
    "Deploy to the server specified"
    root_path = '/usr/local/my_env'

    with cd(root_path):
        with prefix("source %s/bin/activate" % root_path):
            with cd('flask_catalog_deployment'):
                run('git pull')
                run('python setup.py install')

            sudo('bin/supervisorctl restart all')

Here, we first moved into our virtualenv, activated it, and then moved into our application directory. Then, the code is pulled from the Git repository, and the updated application is installed using setup.py install. After this, we restarted the Supervisor processes so that the updated application is now served.

Most of the commands used here are self-explanatory, except prefix, which wraps all the commands issued in its block with the command provided. This means that the command to activate the virtualenv will run first, and then all the commands in the with block will execute with the virtualenv activated. The virtualenv will be deactivated as soon as control goes out of the with block.
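To make the behavior of prefix concrete, here is a toy illustration (not Fabric's actual implementation) of how the prefixed command is prepended to everything run inside the block:

```python
class FakeShell:
    """Toy stand-in for Fabric's command runner: records the full shell
    command that each run() call would execute."""
    def __init__(self):
        self.prefixes = []
        self.history = []

    def run(self, command):
        # Every active prefix is joined in front of the actual command
        full = ' && '.join(self.prefixes + [command])
        self.history.append(full)
        return full

class prefix:
    """Context manager that, like Fabric's prefix(), prepends its command
    to everything run inside the block."""
    def __init__(self, shell, command):
        self.shell = shell
        self.command = command

    def __enter__(self):
        self.shell.prefixes.append(self.command)

    def __exit__(self, *exc):
        self.shell.prefixes.pop()

shell = FakeShell()
with prefix(shell, 'source /usr/local/my_env/bin/activate'):
    shell.run('python setup.py install')
# -> 'source /usr/local/my_env/bin/activate && python setup.py install'
```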

How it works…

To run this file, we need to provide the remote server where the script will be executed. So, the command will look something like:

$ fab -H my.remote.server deploy_app

Here, we specified the address of the remote host where we wish to deploy and the name of the method to be called from the fab script.

There’s more…

We can also specify the remote host inside our fab script, and this can be a good idea if the deployment server remains the same most of the time. To do this, add the following code to the fab script:

from fabric.api import settings
 
def deploy_app_to_server():
   "Deploy to the server hardcoded"
   with settings(host_string='my.remote.server'):
       deploy_app()

Here, we have hardcoded the host and then called the method we created earlier to start the deployment process.

S3 storage for file uploads

Amazon explains S3 as storage for the Internet that is designed to make web-scale computing easier for developers. S3 provides a very simple interface via web services; this makes storage and retrieval of any amount of data very simple at any time from anywhere on the Internet. Until now, in our catalog application, we saw that there were issues in managing the product images uploaded as part of the product-creation process. This whole headache goes away if the images are stored somewhere globally and are easily accessible from anywhere. We will use S3 for this purpose.

Getting ready

Amazon offers boto, a complete Python library that interfaces with Amazon Web Services via web services. Almost all of the AWS features can be controlled using boto. It can be installed using pip:

$ pip install boto

How to do it…

Now, we should make some changes to our existing catalog application to accommodate support for file uploads and retrieval from S3.

First, we need to store the AWS-specific configuration to allow boto to make calls to S3. Add the following statements to the application’s configuration file, that is, my_app/__init__.py:

app.config['AWS_ACCESS_KEY'] = 'Amazon Access Key'
app.config['AWS_SECRET_KEY'] = 'Amazon Secret Key'
app.config['AWS_BUCKET'] = 'flask-cookbook'

Next, we need to change our views.py file:

from boto.s3.connection import S3Connection

This is the import that we need from boto. Next, replace the following two lines in create_product():

filename = secure_filename(image.filename)
image.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))

Replace these two lines with:

filename = image.filename
conn = S3Connection(
    app.config['AWS_ACCESS_KEY'], app.config['AWS_SECRET_KEY']
)
bucket = conn.create_bucket(app.config['AWS_BUCKET'])
key = bucket.new_key(filename)
# Set the content type before uploading; metadata set after the
# upload is not sent to S3
key.set_metadata(
    'Content-Type', 'image/' + filename.split('.')[-1].lower()
)
key.set_contents_from_file(image)
key.make_public()

The last change will go to our product.html template, where we need to change the image src path. Replace the original img src statement with the following statement:

<img src="{{ 'https://s3.amazonaws.com/' + config['AWS_BUCKET'] + '/' + product.image_path }}"/>
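The URL in the template is built by simple string concatenation; the same construction can be expressed as a small helper (a sketch, assuming the object was made public with key.make_public() and is served from the default s3.amazonaws.com endpoint):

```python
def s3_public_url(bucket, key_name, host='s3.amazonaws.com'):
    """Build the public URL for an S3 object, matching the
    concatenation used in the template."""
    return 'https://%s/%s/%s' % (host, bucket, key_name)
```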

How it works…

Now, run the application as usual and create a product. When the created product is rendered, the product image will take a bit of time to come up, as it is now being served from S3 (and not from the local machine). If the image appears, then the integration with S3 has been done successfully.

Deploying with Heroku

Heroku is a cloud application platform that provides an easy and quick way to build and deploy web applications. Heroku manages the servers, deployment, and related operations while developers spend their time on developing applications. Deploying with Heroku is pretty simple with the help of the Heroku toolbelt, which is a bundle of some tools that make deployment with Heroku a cakewalk.

Getting ready

We will proceed with the application from the previous recipe that has S3 support for uploads.

As mentioned earlier, the first step will be to download the Heroku toolbelt, which can be downloaded as per the OS from https://toolbelt.heroku.com/.

Once the toolbelt is installed, a certain set of commands will be available at the terminal; we will see them later in this recipe.

It is advised that you perform Heroku deployment from a fresh virtualenv where we have only the required packages for our application installed and nothing else. This will make the deployment process faster and easier.

Now, run the following command to log in to your Heroku account and sync your machine's SSH key with the server:

$ heroku login
Enter your Heroku credentials.
Email: [email protected]
Password (typing will be hidden):
Authentication successful.

You will be prompted to create a new SSH key if one does not exist. Proceed accordingly.

Remember! Before all this, you need to have a Heroku account, which you can create at https://www.heroku.com/.

How to do it…

Now, we already have an application that needs to be deployed to Heroku. First, Heroku needs to know the command that it needs to run while deploying the application. This is done in a file named Procfile:

web: gunicorn -w 4 my_app:app

Here, we tell Heroku to use this command to run our web application.

There are a lot of different configurations and commands that can go into Procfile. For more details, read the Heroku documentation.

Heroku needs to know the dependencies that need to be installed in order to successfully install and run our application. This is done via the requirements.txt file:

Flask==0.10.1
Flask-Restless==0.14.0
Flask-SQLAlchemy==1.0
Flask-WTF==0.10.0
Jinja2==2.7.3
MarkupSafe==0.23
SQLAlchemy==0.9.7
WTForms==2.0.1
Werkzeug==0.9.6
boto==2.32.1
gunicorn==19.1.1
itsdangerous==0.24
mimerender==0.5.4
python-dateutil==2.2
python-geoip==1.2
python-geoip-geolite2==2014.0207
python-mimeparse==0.1.4
six==1.7.3
wsgiref==0.1.2

This file contains all the dependencies of our application, the dependencies of these dependencies, and so on. An easy way to generate this file is using the pip freeze command:

$ pip freeze > requirements.txt

This will create/update the requirements.txt file with all the packages installed in virtualenv.

Now, we need to create a Git repo of our application. For this, we will run the following commands:

$ git init
$ git add .
$ git commit -m "First Commit"

Now, we have a Git repo with all our files added.

Make sure that you have a .gitignore file in your repo or at a global level to prevent temporary files such as .pyc from being added to the repo.

Now, we need to create a Heroku application and push our application to Heroku:

$ heroku create
Creating damp-tor-6795... done, stack is cedar
http://damp-tor-6795.herokuapp.com/ | git@heroku.com:damp-tor-6795.git
Git remote heroku added
$ git push heroku master

After the last command, a whole lot of stuff will get printed on the terminal; this will indicate all the packages being installed and finally, the application being launched.

How it works…

After the previous commands have successfully finished, just open up the URL provided by Heroku at the end of deployment in a browser or run the following command:

$ heroku open

This will open up the application’s home page. Try creating a new product with an image and see the image being served from Amazon S3.

To see the logs of the application, run the following command:

$ heroku logs

There’s more…

There is a glitch with the deployment we just did. Every time we update the deployment via the git push command, the SQLite database gets overwritten. The solution to this is to use the Postgres setup provided by Heroku itself. I urge you to try this by yourself.
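A sketch of that fix, assuming the Flask-SQLAlchemy setup from the earlier recipes: Heroku exposes the Postgres connection string in the DATABASE_URL environment variable once the add-on is attached, so the application can fall back to SQLite only when that variable is absent:

```python
import os

def database_uri(environ=None):
    """Prefer Heroku's DATABASE_URL; fall back to local SQLite otherwise.
    (The SQLite filename here is a placeholder.)"""
    environ = os.environ if environ is None else environ
    return environ.get('DATABASE_URL', 'sqlite:///catalog.db')

# In the application's configuration:
# app.config['SQLALCHEMY_DATABASE_URI'] = database_uri()
```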

Deploying with AWS Elastic Beanstalk

In the last recipe, we saw how deployment to servers becomes easy with Heroku. Similarly, Amazon has a service named Elastic Beanstalk, which allows developers to deploy their application to Amazon EC2 instances as easily as possible. With just a few configuration options, a Flask application can be deployed to AWS using Elastic Beanstalk in a couple of minutes.

Getting ready

We will start with our catalog application from the previous recipe, Deploying with Heroku. The only file that remains the same from that recipe is requirements.txt. The rest of the files that were added as part of that recipe can be ignored or discarded for this one.

Now, the first thing that we need to do is download the AWS Elastic Beanstalk command-line tool library from the Amazon website (http://aws.amazon.com/code/6752709412171743). This will download a ZIP file that needs to be unzipped and placed in a suitable place, preferably your workspace home.

The path of this tool should be added to the PATH environment variable so that the commands are available throughout. This can be done via the export command, as shown here:

$ export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

This can also be added to the ~/.profile or ~/.bash_profile file using:

export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

How to do it…

There are a few conventions that need to be followed in order to deploy using Beanstalk. Beanstalk assumes that there will be a file called application.py, which contains the application object (in our case, the app object). Beanstalk treats this file as the WSGI file, and this is used for deployment.

In the Deploying with Apache recipe, we had a file named app.wsgi where we referred to our app object as application because Apache/mod_wsgi needed it to be so. The same thing happens here too, because Amazon, by default, deploys using Apache behind the scenes.

The contents of this application.py file can be just a few lines as shown here:

from my_app import app as application
import sys, logging
logging.basicConfig(stream = sys.stderr)

Now, create a Git repo in the application and commit with all the files added:

$ git init
$ git add .
$ git commit -m "First Commit"

Make sure that you have a .gitignore file in your repo or at a global level to prevent temporary files such as .pyc from being added to the repo.

Now, we need to deploy to Elastic Beanstalk. Run the following command to do this:

$ eb init

The preceding command initializes the process for the configuration of your Elastic Beanstalk instance. It will ask for the AWS credentials followed by a lot of other configuration options needed for the creation of the EC2 instance, which can be selected as needed. For more help on these options, refer to http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_flask.html.

After this is done, run the following command to trigger the creation of servers, followed by the deployment of the application:

$ eb start

Behind the scenes, the preceding command creates the EC2 instance and a volume, assigns an elastic IP, and then runs the following command to push our application to the newly created server for deployment:

$ git aws.push

This will take a few minutes to complete. When done, you can check the status of your application using the following command:

$ eb status --verbose

Whenever you need to update your application, just commit your changes with git and then push them as follows:

$ git aws.push

How it works…

When the deployment process finishes, it gives out the application URL. Point your browser to it to see the application being served.

However, you will find a small glitch with the application. The static content, that is, the CSS and JS files, is not being served. This is because the static path is not correctly understood by Beanstalk. This can be fixed simply by modifying the application's configuration on your application's monitoring/configuration page in the AWS management console. See the following screenshots to understand this better:

[Screenshot: the Elastic Beanstalk application monitoring page]

Click on the Configuration menu item in the left-hand side menu.

[Screenshot: the Configuration page]

Notice the highlighted box in the preceding screenshot. This is what we need to change as per our application. Open Software Settings.

[Screenshot: Software Settings showing the virtual path mapping for /static/]

Change the virtual path for /static/, as shown in the preceding screenshot.

After this change is made, the environment created by Elastic Beanstalk will be updated automatically, although it will take a bit of time. When done, check the application again to see the static content also being served correctly.

Application monitoring with Pingdom

Pingdom is a website-monitoring tool whose USP is notifying you as soon as your website goes down. The basic idea behind this tool is to constantly ping the website at a specific interval, say, 30 seconds. If a ping fails, it will notify you via an e-mail, SMS, tweet, or a push notification to its mobile apps, informing you that your site is down. It will keep pinging at a faster rate until the site is back up again. There are other monitoring features too, but we will limit ourselves to uptime checks in this book.
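The core idea is simple enough to sketch in a few lines (this is an illustration of the concept, not Pingdom's implementation; fetch stands for any function that performs the HTTP request and returns a status code, raising an exception on network failure):

```python
def check_site(fetch):
    """Single uptime probe: the site is considered up when the request
    succeeds with a non-error status code, and down otherwise."""
    try:
        status = fetch()
    except Exception as exc:
        # Network failure (timeout, DNS error, refused connection, ...)
        return False, 'request failed: %s' % exc
    return 200 <= status < 400, 'status %d' % status
```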

Getting ready

As Pingdom is a SaaS service, the first step will be to sign up for an account. Pingdom offers a free trial of 1 month in case you just want to try it out. The website for the service is https://www.pingdom.com.

We will use the application deployed to AWS in the Deploying with AWS Elastic Beanstalk recipe to check for uptime. Here, Pingdom will send an e-mail in case the application goes down and will send an e-mail again when it is back up.

How to do it…

After successful registration, create a check for uptime. Have a look at the following screenshot:

[Screenshot: the Pingdom checks list showing an existing check for the AWS instance]

As you can see, I already added a check for the AWS instance. To create a new check, click on the ADD NEW button. Fill in the details asked by the form that comes up.

How it works…

After the check is successfully created, try to break the application by consciously making a mistake somewhere in the code and then deploying it to AWS. As soon as the faulty application is deployed, you will get an e-mail notifying you of the outage. This e-mail will look like this:

[Screenshot: the "site is down" alert e-mail]

Once the application is fixed and put back up again, the next e-mail should look like:

[Screenshot: the "site is up again" e-mail]

You can also check how long the application has been up and the downtime instances from the Pingdom administration panel.

Application performance management and monitoring with New Relic

New Relic is an analytics product that provides near real-time operational and business analytics related to your application. It provides deep analytics on the behavior of the application from various aspects. It does the job of a profiler while eliminating the need to maintain extra moving parts in the application. It works on a push model: our application sends data to New Relic rather than New Relic polling our application for statistics.

Getting ready

We will use the application from the last recipe, which is deployed to AWS.

The first step will be to sign up with New Relic for an account. Follow the simple signup process, and upon completion and e-mail verification, it will lead to your dashboard. Here, you will have your license key available, which we will use later to connect our application to this account. The dashboard should look like the following screenshot:

[Screenshot: the New Relic dashboard with the Reveal your license key button]

Here, click on the large button named Reveal your license key.

How to do it…

Once we have the license key, we need to install the newrelic Python library:

$ pip install newrelic

Now, we need to generate a file called newrelic.ini, which will contain details regarding the license key, the name of our application, and so on. This can be done using the following commands:

$ newrelic-admin generate-config LICENSE-KEY newrelic.ini

In the preceding command, replace LICENSE-KEY with the actual license key of your account. Now, we have a new file called newrelic.ini. Open the file and edit the application name and anything else as needed.
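For example, the app_name setting in the generated file controls the name under which the application appears on the New Relic dashboard; an edited fragment might look like this (the application name shown is a placeholder):

```ini
[newrelic]
license_key = LICENSE-KEY
app_name = Flask Catalog (Production)
```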

To check whether the newrelic.ini file is working successfully, run the following command:

$ newrelic-admin validate-config newrelic.ini

This will tell us whether the validation was successful or not. If not, then check the license key and its validity.

Now, add the following lines at the top of the application’s configuration file, that is, my_app/__init__.py in our case. Make sure that you add these lines before anything else is imported:

import newrelic.agent
newrelic.agent.initialize('newrelic.ini')

Now, we need to update the requirements.txt file. So, run the following command:

$ pip freeze > requirements.txt

After this, commit the changes and deploy the application to AWS using the following command:

$ git aws.push

How it works…

Once the application is successfully updated on AWS, it will start sending statistics to New Relic, and the dashboard will have a new application added to it.

[Screenshot: the New Relic applications list with the newly added application]

Open the application-specific page, and a whole lot of statistics will be shown. The dashboard also shows which calls have taken the most time and how the application is performing. You will also see multiple tabs, each corresponding to a different type of monitoring, covering all the aspects of the application.

Summary

In this article, we have seen the various techniques used to deploy and monitor Flask applications.
