Efficiency and scalability
web2py is designed to be easy to deploy and to set up. This does not mean that it compromises on efficiency or scalability, but it means you may need to tweak it to make it scalable.
In this section we assume multiple web2py installations behind a NAT server that provides local load-balancing.
In this case, web2py works out-of-the-box if some conditions are met. In particular, all instances of each web2py application must access the same database servers and must see the same files. This latter condition can be implemented by making the following folders shared:
applications/myapp/sessions
applications/myapp/errors
applications/myapp/uploads
applications/myapp/cache
The shared folders must support file locking. Possible solutions are ZFS (developed by Sun Microsystems and the preferred choice), NFS (with NFS you may need to run the nlockmgr daemon to allow file locking), or Samba (SMB).
It is possible to share the entire web2py folder or the entire applications folder, but this is not a good idea because it would needlessly increase network bandwidth usage.
We believe the configuration discussed above to be very scalable because it reduces the database load by moving to the shared filesystems those resources that need to be shared but do not need transactional safety (only one client at a time is supposed to access a session file, cache always needs a global lock, uploads and errors are write once/read many files).
Ideally, both the database and the shared storage should have RAID capability. Do not make the mistake of storing the database on the same storage as the shared folders, or you will create a new bottleneck there.
On a case-by-case basis, you may need to perform additional optimizations and we will discuss them below. In particular, we will discuss how to get rid of these shared folders one-by-one, and how to store the associated data in the database instead. While this is possible, it is not necessarily a good solution. Nevertheless, there may be reasons to do so. One such reason is that sometimes we do not have the freedom to set up shared folders.
Efficiency tricks
web2py application code is executed on every request, so you want to minimize this amount of code. Here is what you can do:
- Run once with migrate=True then set all your tables to migrate=False.
- Bytecode compile your app using admin.
- Use cache.ram as much as you can, but make sure to use a finite set of keys, or else the amount of cache used will grow arbitrarily.
- Minimize the code in models: do not define functions there; define functions in the controllers that need them or, even better, define functions in modules, import them, and use those functions as needed.
- Do not put many functions in the same controller; use many controllers with few functions instead.
- Call session.forget(response) in all controllers and/or functions that do not change the session (see the sketch after this list).
- Try to avoid web2py cron, and use a background process instead. web2py cron can start too many Python instances and cause excessive memory usage.
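For instance, here is a minimal sketch of the first and sixth tricks combined (the dog table and the list_dogs action are hypothetical; db, Field, session and response are provided by web2py's execution environment):
# in a model: after the first run with migrate=True, freeze the schema
db.define_table('dog', Field('name'), migrate=False)

# in a controller: a read-only action that never changes the session
def list_dogs():
    session.forget(response)  # skip the session save and release its lock
    return dict(dogs=db(db.dog).select())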
Sessions in database
It is possible to instruct web2py to store sessions in a database instead of in the sessions folder. This has to be done for each individual web2py application, although they may all use the same database to store sessions.
Given a database connection
db = DAL(...)
you can store the sessions in this database (db) by simply stating the following, in the same model file that establishes the connection:
session.connect(request, response, db)
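Since the session database need not be the one that holds your application data, here is a minimal sketch of a dedicated session database shared by all applications (the connection URI is an assumption):
# in the same model file: a dedicated database used only for sessions
session_db = DAL('postgres://username:password@localhost/sessions')
session.connect(request, response, db=session_db)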
If it does not exist already, web2py creates, under the hood, a table in the database called web2py_session_appname containing the following fields:
Field('locked', 'boolean', default=False),
Field('client_ip'),
Field('created_datetime', 'datetime', default=request.now),
Field('modified_datetime', 'datetime'),
Field('unique_key'),
Field('session_data', 'text')
“unique_key” is a uuid key used to identify the session in the cookie. “session_data” is the cPickled session data.
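Once session.connect has defined this table, it can be queried like any other; here is a hedged sketch that counts recently active sessions (the application name myapp and the one-hour window are assumptions):
import datetime
# the table exists on db only after session.connect(request, response, db)
one_hour_ago = request.now - datetime.timedelta(hours=1)
active = db(db.web2py_session_myapp.modified_datetime > one_hour_ago).count()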
To minimize database access, you should avoid storing sessions when they are not needed with:
session.forget()
Sessions are automatically forgotten if unchanged.
With sessions in database, the “sessions” folder does not need to be a shared folder because it will no longer be accessed.
Notice that, if sessions are disabled, you must not pass the session to form.accepts and you cannot use session.flash nor CRUD.
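For example, here is a minimal sketch of a form processed without a session (the db.message table is hypothetical; note that without a session there is no CSRF form key, so protect such forms by other means):
def contact():
    form = SQLFORM(db.message)
    # no session argument; use response.flash since session.flash is unavailable
    if form.accepts(request.vars, formname=None):
        response.flash = 'message saved'
    return dict(form=form)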
HAProxy: a high availability load balancer
If you need multiple web2py processes running on multiple machines, instead of storing sessions in the database or in cache, you have the option to use a load balancer with sticky sessions.
Pound [pound] and HAProxy [haproxy] are two HTTP load balancers and reverse proxies that provide sticky sessions. Here we discuss the latter because it seems to be more common on commercial VPS hosting.
By sticky sessions, we mean that once a session cookie has been issued, the load balancer will always route requests from the client associated with that session to the same server. This allows you to store the session in the local filesystem without the need for a shared filesystem.
To use HAProxy:
First, install it; on our Ubuntu test machine:
sudo apt-get -y install haproxy
Second, edit the configuration file “/etc/haproxy.cfg” to something like this:
## this config needs haproxy-1.1.28 or haproxy-1.2.1
global
    log 127.0.0.1 local0
    maxconn 1024
    daemon
defaults
    log global
    mode http
    option httplog
    option httpchk
    option httpclose
    retries 3
    option redispatch
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
listen 0.0.0.0:80
    balance url_param WEB2PYSTICKY
    balance roundrobin
    server L1_1 10.211.55.1:7003 check
    server L1_2 10.211.55.2:7004 check
    server L1_3 10.211.55.3:7004 check
    appsession WEB2PYSTICKY len 52 timeout 1h
The listen directive tells HAProxy which port to wait for connections on. The server directive tells HAProxy where to find the proxied servers. The appsession directive makes a sticky session and uses a cookie called WEB2PYSTICKY for this purpose.
Third, enable this config file and start HAProxy:
/etc/init.d/haproxy restart
You can find similar instructions to set up Pound at the URL
http://web2pyslices.com/main/slices/take_slice/33
Cleaning up sessions
You should be aware that in a production environment, sessions pile up fast. web2py provides a script:
scripts/sessions2trash.py
which, when run in the background, periodically deletes all sessions that have not been accessed for a certain amount of time. It works for both file-based sessions and database sessions.
Here are some typical use cases:
- Delete expired sessions every 5 minutes:
nohup python web2py.py -S app -M -R scripts/sessions2trash.py &
or on Windows, use nssm as described above in the scheduler section. You will probably need to include the full path to both web2py.py and the scripts folder, and the trailing & is not needed.
- Delete sessions older than 60 minutes regardless of expiration, with verbose output, then exit:
python web2py.py -S app -M -R scripts/sessions2trash.py -A -o -x 3600 -f -v
- Delete all sessions regardless of expiry and exit:
python web2py.py -S app -M -R scripts/sessions2trash.py -A -o -x 0
sessions2trash.py has its own specific command line options that can be passed when launching the web2py shell from the command line.
NOTE: they must be preceded by the web2py command line option "-A" for them to be passed on to the script.
-f, --force Ignore session expiration. Force expiry based on the -x option or auth.settings.expiration.
-o, --once Delete sessions, then exit. Essential when triggering session cleanup from a system cron job.
-s SECONDS, --sleep Number of seconds to sleep between executions. Default 300.
-v, --verbose Print verbose output; a second -v increases verbosity.
-x SECONDS, --expiration Expiration value for sessions without expiration (in seconds).
- One last example: if you want to launch sessions2trash.py from a system cron job to delete all expired sessions and exit:
python web2py.py -S app -M -R scripts/sessions2trash.py -C -A -o
In the previous examples, app is the name of your application.
Uploading files in database
By default, all uploaded files handled by SQLFORMs are safely renamed and stored in the filesystem under the “uploads” folder. It is possible to instruct web2py to store uploaded files in the database instead.
Now, consider the following table:
db.define_table('dog',
    Field('name'),
    Field('image', 'upload'))
where dog.image is of type upload. To make the uploaded image go in the same record as the name of the dog, you must modify the table definition by adding a blob field and linking it to the upload field:
db.define_table('dog',
    Field('name'),
    Field('image', 'upload', uploadfield='image_data'),
    Field('image_data', 'blob'))
Here “image_data” is just an arbitrary name for the new blob field.
The third line instructs web2py to safely rename uploaded images as usual, store the new name in the image field, and store the data in the uploadfield called “image_data” instead of storing the data on the filesystem. All of this is done automatically by SQLFORMs and no other code needs to be changed.
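Downloads also work unchanged, because response.download reads the blob through the uploadfield transparently; here is a minimal controller sketch:
def download():
    # streams dog.image whether it lives on the filesystem or in the blob field
    return response.download(request, db)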
With this tweak, the “uploads” folder is no longer needed.
On Google App Engine, files are stored by default in the database without the need to define an uploadfield, since one is created by default.
Collecting tickets
By default, web2py stores tickets (errors) on the local file system. It would not make sense to store tickets directly in the database, because the most common origin of error in a production environment is database failure.
Storing tickets is never a bottleneck, because this is ordinarily a rare event. Hence, in a production environment with multiple concurrent servers, it is more than adequate to store them in a shared folder. Nevertheless, since only the administrator needs to retrieve tickets, it is also OK to store tickets in a non-shared local “errors” folder and periodically collect them and/or clear them.
One possibility is to periodically move all local tickets to the database.
For this purpose, web2py provides the following script:
scripts/tickets2db.py
By default the script gets the db uri from a file saved in the private folder, ticket_storage.txt. This file should contain a string that is passed directly to a DAL instance, like:
mysql://username:password@localhost/test
postgres://username:password@localhost/test
...
This allows you to leave the script as it is: if you have multiple applications, it will dynamically choose the right connection for every application. If you want to hardcode the uri in it, edit the second reference to db_string, right after the except line. You can run the script with the command:
nohup python web2py.py -S myapp -M -R scripts/tickets2db.py &
where myapp is the name of your application.
This script runs in the background and every 5 minutes moves all local tickets to a database table and removes them from the filesystem. You can later view the errors using the admin app by clicking on the “switch to: db” button at the top, with exactly the same functionality as if they were stored on the file system.
With this tweak, the “errors” folder does not need to be a shared folder any more, since errors will be stored into the database.
Memcache
We have shown that web2py provides two types of cache: cache.ram and cache.disk. They both work in a distributed environment with multiple concurrent servers, but they do not work as expected. In particular, cache.ram will only cache at the server level; thus it becomes useless. cache.disk will also cache at the server level unless the “cache” folder is a shared folder that supports locking; thus, instead of speeding things up, it becomes a major bottleneck.
The solution is not to use them, but to use memcache instead. web2py comes with a memcache API.
To use memcache, create a new model file, for example 0_memcache.py, and in this file write (or append) the following code:
from gluon.contrib.memcache import MemcacheClient
memcache_servers = ['127.0.0.1:11211']
cache.memcache = MemcacheClient(request, memcache_servers)
cache.ram = cache.disk = cache.memcache
The first line imports memcache. The second line has to be a list of memcache sockets (server:port). The third line defines cache.memcache. The fourth line redefines cache.ram and cache.disk in terms of memcache.
You could choose to redefine only one of them, or to define a totally new cache object pointing to the Memcache object.
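For example, here is a minimal sketch of caching a slow query through the new cache object (the db.dog table and the 300 second expiry are assumptions):
# the result set is stored in memcache for 5 minutes; subsequent
# requests on any server hit memcache instead of the database
dogs = db(db.dog).select(cache=(cache.memcache, 300))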
With this tweak the “cache” folder does not need to be a shared folder any more, since it will no longer be accessed.
This code requires having memcache servers running on the local network. You should consult the memcache documentation for information on how to set up those servers.
Sessions in memcache
If you do need sessions and you do not want to use a load balancer with sticky sessions, you have the option to store sessions in memcache:
from gluon.contrib.memdb import MEMDB
session.connect(request, response, db=MEMDB(cache.memcache))
Caching with Redis
An alternative to Memcache is to use Redis [redis].
Assuming we have Redis installed and running on localhost at port 6379, we can connect to it using the following code (in a model):
from gluon.contrib.redis_utils import RConn
from gluon.contrib.redis_cache import RedisCache
rconn = RConn('localhost', 6379)
cache.redis = RedisCache(redis_conn=rconn, debug=True)
We can now use cache.redis in place of (or along with) cache.ram and cache.disk.
We can also obtain Redis statistics by calling:
cache.redis.stats()
The Redis cache subsystem allows you to prevent the infamous “thundering herd” problem: this is not active by default because usually you choose Redis for speed, but at a negligible cost you can make sure that only one thread/process sets a value at a time. To activate this behaviour, just pass the with_lock=True parameter to the RedisCache call. You can also enable the behaviour “on demand” with:
value = cache.redis('mykey', lambda: time.time(), with_lock=True)
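Here is a minimal sketch of the locked variant guarding an expensive computation (the key name, expiry and the computation itself are assumptions):
import time
def expensive_report():
    time.sleep(2)  # stand-in for a slow computation
    return 42
# only one process recomputes the value when it expires; the others
# wait on the lock instead of stampeding the backend
report = cache.redis('report', expensive_report, time_expire=60, with_lock=True)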
Sessions in Redis
If you have Redis in your stack, why not use it for sessions?
from gluon.contrib.redis_utils import RConn
from gluon.contrib.redis_session import RedisSession
rconn = RConn()
sessiondb = RedisSession(redis_conn=rconn, session_expiry=False)
session.connect(request, response, db=sessiondb)
The code has been tested with ~1M sessions. As long as Redis can fit in memory, the time taken to handle 1 or 1M sessions is the same. Compared with file-based or database-based sessions, the speedup is unnoticeable at ~40K sessions, but beyond that barrier the improvement is remarkable. A big improvement can also be noticed when you are running a “farm” of web2py instances, because sharing the sessions folder or having multiple processes connected to a database often bogs down the system. You’ll end up with 1 key per session, plus 2 keys: one holding an integer (needed for assigning different session keys) and the other holding the set of all sessions generated (so for 1000 sessions, 1002 keys).
If session_expiry is not set, sessions will be handled as usual, and you would need to clean them up once in a while as usual. However, when session_expiry is set, sessions are deleted automatically after n seconds (e.g. if set to 3600, a session will expire exactly one hour after it was last updated); you should still occasionally run sessions2trash.py just to clean the key holding the set of all previously issued sessions (for ~1M sessions, cleaning up requires 3 seconds). The Redis backend for sessions is the only one that can prevent concurrent modifications to the same session: this is especially useful for ajax-intensive applications that write to sessions often in a semi-concurrent way. To favour speed, this is not enforced by default; if you want to turn on the locking behaviour, pass the with_lock=True parameter to the RedisSession object.
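For instance, here is a minimal variant of the code above with automatic expiry (the one-hour value is an assumption):
rconn = RConn()
# sessions silently disappear one hour after their last update
sessiondb = RedisSession(redis_conn=rconn, session_expiry=3600)
session.connect(request, response, db=sessiondb)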
Removing applications
In a production setting, it may be better not to install the default applications: admin, examples and welcome. Although these applications are quite small, they are not necessary.
Removing these applications is as easy as deleting the corresponding folders under the applications folder.
Using replicated databases
In a high performance environment you may have a master-slave database architecture with many replicated slaves and perhaps a couple of replicated masters. The DAL can handle this situation and conditionally connect to different servers depending on the request parameters. The API to do this was described in Chapter 6. Here is an example:
from random import sample
db = DAL(sample(['mysql://...1', 'mysql://...2', 'mysql://...3'], 3))
In this case, different HTTP requests will be served by different databases at random, and each DB will be hit more or less with the same probability.
We can also implement a simple round-robin:
def fail_safe_round_robin(*uris):
    i = cache.ram('round-robin', lambda: 0, None)
    uris = uris[i:] + uris[:i]  # rotate the list of uris
    cache.ram('round-robin', lambda: (i+1) % len(uris), 0)
    return uris

db = DAL(fail_safe_round_robin('mysql://...1', 'mysql://...2', 'mysql://...3'))
This is fail-safe in the sense that if the database server assigned to the request fails to connect, DAL will try the next one in the order.
It is also possible to connect to different databases depending on the requested action or controller. In a master-slave database configuration, some actions perform only reads and some perform both reads and writes. The former can safely connect to a slave db server, while the latter should connect to a master. So you can do:
from random import sample
# read_only_actions is a list you define, e.g. ['index', 'search', 'view']
if request.function in read_only_actions:
    # read-only actions can connect to any slave
    db = DAL(sample(['mysql://...1', 'mysql://...2', 'mysql://...3'], 3))
else:
    # actions that write must connect to a master
    db = DAL(sample(['mysql://...3', 'mysql://...4', 'mysql://...5'], 3))
where 1, 2, 3 are slaves and 3, 4, 5 are masters.
Compress static files
Browsers can decompress content on the fly, so compressing content for them saves bandwidth on both ends and lowers response times. Nowadays most web servers can compress your content on the fly and send it to browsers requesting gzipped content. However, for static files, you are wasting CPU cycles compressing the same content over and over.
You can use scripts/zip_static_files.py to create gzipped versions of your static files and serve those without wasting CPU. Run
python web2py.py -S myapp -R scripts/zip_static_files.py
in cron. The script takes care of creating (or updating) the gzipped versions and saves them alongside your files, appending .gz to their names. You just need to let your web server know when to send those files [apache-content-negotiation] [nginx-gzipstatic].
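Here is a minimal sketch of the idea behind the script (the helper name and file handling details are assumptions; the real script walks the static folder of every enabled application):
import gzip, os

def gzip_static_file(path):
    # write path + '.gz' next to the original when missing or stale
    gzpath = path + '.gz'
    if not os.path.exists(gzpath) or os.path.getmtime(gzpath) < os.path.getmtime(path):
        with open(path, 'rb') as src, gzip.open(gzpath, 'wb') as dst:
            dst.write(src.read())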