Share data between handles

Sometimes applications need to share data between transfers. All easy handles
added to the same multi handle automatically get a certain amount of sharing
done between them, but sometimes that’s not exactly what you want.

Multi handle

All easy handles added to the same multi handle automatically share cookies,
connection cache, dns cache and SSL session id cache.
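
A minimal sketch of this, assuming two transfers that should share those
caches (the code that actually drives the transfers with curl_multi_perform
and the final cleanup calls are left out):

  CURL *h1 = curl_easy_init();
  CURL *h2 = curl_easy_init();
  CURLM *multi = curl_multi_init();

  /* once added, both handles use the caches owned by the multi handle */
  curl_multi_add_handle(multi, h1);
  curl_multi_add_handle(multi, h2);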

Sharing between easy handles

libcurl has a generic “sharing interface”, where the application creates a
“share object” that then holds data that can be shared by any number of easy
handles. The data is then stored in and read from the share object instead of
being kept within each of the handles that share it.

  CURLSH *share = curl_share_init();

The shared object can be set to share all or any of cookies, connection cache,
dns cache and SSL session id cache.

For example, setting up the share to hold cookies and dns cache:

  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);

… and then you set up the corresponding transfer to use this share object:

  curl_easy_setopt(curl, CURLOPT_SHARE, share);

Transfers done with this curl handle then use and store their cookie and
dns information in the share object. You can set several easy handles to
share the same share object.
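
As an illustration only, two transfers using the same share object could look
like the sketch below. The URLs are placeholders and error checking is left
out; the CURLOPT_COOKIEFILE lines start the cookie engine, which the next
section says each handle still needs.

  /* create the share object and tell it what to share */
  CURLSH *share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);

  CURL *first = curl_easy_init();
  curl_easy_setopt(first, CURLOPT_URL, "https://example.com/login");
  curl_easy_setopt(first, CURLOPT_COOKIEFILE, ""); /* start the cookie engine */
  curl_easy_setopt(first, CURLOPT_SHARE, share);
  curl_easy_perform(first);

  /* this handle finds the cookies and resolved addresses that the
     first transfer stored in the share object */
  CURL *second = curl_easy_init();
  curl_easy_setopt(second, CURLOPT_URL, "https://example.com/account");
  curl_easy_setopt(second, CURLOPT_COOKIEFILE, "");
  curl_easy_setopt(second, CURLOPT_SHARE, share);
  curl_easy_perform(second);

  curl_easy_cleanup(first);
  curl_easy_cleanup(second);

  /* clean up the share object last, when no handle uses it anymore */
  curl_share_cleanup(share);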

What to share

CURL_LOCK_DATA_COOKIE - set this bit to share the cookie jar. Note that each
easy handle still needs to get its cookie “engine” started properly to start
using cookies.
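
For reference, a sketch of the two common ways to switch the cookie engine on
for a handle (the file name is only an example):

  /* enable the cookie engine without reading any cookies from disk */
  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");

  /* or enable it and also write cookies to a file when the handle is
     cleaned up */
  curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies.txt");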

CURL_LOCK_DATA_DNS - the DNS cache is where libcurl stores addresses for
resolved host names for a while to make subsequent lookups faster.

CURL_LOCK_DATA_SSL_SESSION - the SSL session ID cache is where libcurl stores
resume information for SSL connections, to be able to resume a previous
connection faster.

CURL_LOCK_DATA_CONNECT - when set, this handle uses a shared connection cache
and is therefore more likely to find existing connections to reuse, which can
give faster performance when doing multiple transfers to the same host in a
serial manner.
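
Purely as an illustration, a share object told to share all four kinds of
data gets one curl_share_setopt call per data type:

  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_SSL_SESSION);
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_CONNECT);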

Locking

You may want to have the share object used by transfers in a multi-threaded
environment. Perhaps you have a CPU with many cores and you want each core to
run its own thread and transfer data, but you still want the different
transfers to share data. Then you need to set the mutex callbacks.

If you don’t use threading and you know you access the shared object in a
serial one-at-a-time manner, you don’t need to set any locks. But if more
than one transfer ever accesses the share object at the same time, the share
object needs mutex callbacks set up to prevent data destruction and possibly
even crashes.

Since libcurl itself doesn’t know how to lock things or even what threading
model you’re using, you must provide mutex locks that only allow one access
at a time. A lock callback for a pthreads-using application could look
similar to:

  static void lock_cb(CURL *handle, curl_lock_data data,
                      curl_lock_access access, void *userptr)
  {
    pthread_mutex_lock(&lock[data]); /* uses a global lock array */
  }

  curl_share_setopt(share, CURLSHOPT_LOCKFUNC, lock_cb);

The corresponding unlock callback could look like:

  static void unlock_cb(CURL *handle, curl_lock_data data,
                        void *userptr)
  {
    pthread_mutex_unlock(&lock[data]); /* uses a global lock array */
  }

  curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, unlock_cb);
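
These callbacks index a global mutex array by the curl_lock_data value.
Assuming pthreads, the supporting pieces could look roughly like this;
CURL_LOCK_DATA_LAST marks the end of the enum, so sizing the array with it
gives one mutex per data type:

  #include <curl/curl.h>
  #include <pthread.h>

  /* one mutex for each kind of data libcurl may lock */
  static pthread_mutex_t lock[CURL_LOCK_DATA_LAST];

  static void init_locks(void)
  {
    int i;
    for(i = 0; i < CURL_LOCK_DATA_LAST; i++)
      pthread_mutex_init(&lock[i], NULL);
  }

If a global array is not wanted, CURLSHOPT_USERDATA can instead pass a
pointer that then arrives in both callbacks as userptr.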

Unshare

A transfer can stop using a share object again by setting CURLOPT_SHARE to
NULL. Make sure no easy handle still uses the share object when you finally
remove it with curl_share_cleanup.
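
As a sketch:

  /* detach this easy handle from the share object */
  curl_easy_setopt(curl, CURLOPT_SHARE, NULL);

  /* ... and when no handle uses the share object anymore */
  curl_share_cleanup(share);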