This document provides an introduction to the native C++ Xapian API. This API provides programmers with the ability to search through (potentially very large) bodies of data using probabilistic methods.
Note: The portion of the API currently documented here covers only the part of Xapian concerned with searching through existing databases, not that concerned with creating them.
It is probably a good idea to read the Introduction to Information Retrieval and Installing Xapian before reading this document, or at least before attempting to use the API. You may also wish to read the QuickStart reference, for some simple worked examples of Xapian usage.
This document does not detail the exact calling conventions (parameters passed, return value, exceptions thrown, etc.) for each method in the API. For such documentation, you should refer to the automatically extracted documentation, which is generated from detailed comments in the source code, and should thus remain up-to-date and accurate. This documentation is generated using the Doxygen application. To save you having to generate this documentation yourself, we include the built version in our distributions, and also keep the latest version on our website.
Error reporting is often relegated to the back of manuals such as this. However, it is extremely important to understand the errors which may be caused by the operations which you are trying to perform.
This becomes particularly relevant when using a large system, with such possibilities as databases which are being updated while you search through them, and distributed enquiry systems.
Errors in Xapian are all reported by means of exceptions. All exceptions thrown by Xapian are subclasses of OmError. Note that OmError is an abstract class; thus you must catch exceptions by reference rather than by value.

There are two flavours of error, derived from OmError:
OmLogicError - for error conditions due to programming errors, such as misuse of the API. A finished application should not receive these errors (though it would still be sensible to catch them).
OmRuntimeError - for error conditions due to run time problems, such as failure to open a database. You must always be ready to cope with such errors.
Each of these flavours is further subdivided, such that any particular error condition can be trapped by catching the appropriate exception. If desired, a human-readable explanation of the error can be retrieved by calling OmError::get_msg().
In addition, standard system errors may occur: these will be reported by throwing appropriate exceptions. Most notably, if the system runs out of memory, a std::bad_alloc exception will be thrown.
Databases may also occasionally be called indexes. In Xapian (as opposed to a database package) a database consists of little more than indexed documents: this reflects the purpose of Xapian as an information retrieval system, rather than an information storage system.
The exact contents of a database depend on the type (see "Database Types" for more details of the database types currently provided).
The information to be searched for is specified by a Query. In Xapian, queries are made up of a structured boolean tree, upon which probabilistic weightings are imposed: when the search is performed, the documents returned are filtered according to the boolean structure, and weighted (and sorted) according to the probabilistic model of information retrieval.
The user of Xapian does not usually need to worry about how Xapian performs its memory allocation: Xapian objects can all be created and deleted like any other C++ objects. The convention is that whoever creates an object is ultimately responsible for deleting it. This becomes relevant when passing a pointer to data to Xapian: Xapian will not assume that such pointers remain valid across separate API calls, and it is the caller's responsibility to delete the object pointed to, as and when required.
The OmEnquire class is central to all searching operations. It provides an interface for the operations involved in an enquiry session, such as specifying the database and query, and retrieving the results. A typical enquiry session will consist of most of these operations, in various orders. The OmEnquire class places as few restrictions as possible on the order in which operations are performed. Although you must set the query before any operation which uses it, you can call the other methods in any order.
Many operations performed by the OmEnquire class are performed lazily (i.e., just before their results are needed). This need not concern the user except to note that, as a result, errors may not be reported as soon as would otherwise be expected. In particular, errors regarding opening of the database may be reported when a query is performed (although they may not: you should catch exceptions in both situations).
When creating an OmEnquire object, a database to search must be specified. Databases are specified by creating an OmDatabase object. This is done using a factory function such as OmQuartz__open() - there's one of these for each backend type. The parameters the function takes depend on the backend type, and on whether we are creating a read-only or a writable database.
Note: stub databases aren't currently working. An extra feature available using the auto backend is "stub databases". If the filename points to a file rather than a directory, the file is assumed to contain a set of settings (one per line, in the format "name=value"), which will be used to open the database. For example, a file might contain:
backend=remote
remote_type=tcp
remote_port=23876
remote_server=localhost
Note that these settings are case sensitive.
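The format described above is simple enough that a sketch of a parser fits in a few lines. The parsing code below is our own illustration, not part of the Xapian API; it reads "name=value" lines into a map, preserving case as the note requires:

```cpp
#include <map>
#include <sstream>
#include <string>

// Parse "name=value" lines (as in a stub database file) into a map.
// Settings are case sensitive, so no case folding is performed.
std::map<std::string, std::string> parse_stub(const std::string &text) {
    std::map<std::string, std::string> settings;
    std::istringstream in(text);
    std::string line;
    while (std::getline(in, line)) {
        std::string::size_type eq = line.find('=');
        if (eq == std::string::npos) continue;  // skip lines with no '='
        settings[line.substr(0, eq)] = line.substr(eq + 1);
    }
    return settings;
}
```

Feeding it the example file above would yield a map with "backend" set to "remote", "remote_type" to "tcp", and so on.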
quartz | This is the main database type, which should be used in almost all cases. The format allows progressive modifications, single-writer multiple-reader access to the database, and highly efficient and scalable access to data. You access a quartz database using OmDatabase and OmWritableDatabase objects constructed by the factory function OmQuartz__open(). |
da_flimsy | This is a proprietary, legacy format, holding a database in a non-updateable form (i.e., the database can't be altered; it is built from an existing database). We support read-only access to this, and it is thus unlikely to be useful outside our company. This takes one, two or three parameters. If one parameter is supplied, it represents the path to a directory containing the Record file in a file called "R", the Term file in a file called "T", and optionally the fast access key file in a file called "keyfile". If two parameters are supplied, they represent the full paths to the Record and Term files, respectively. In this case, there is assumed to be no keyfile. If three parameters are supplied, the first two are the full paths to the Record and Term files, respectively, and the third is the full path to the keyfile. |
da_heavy | This is similar to da_flimsy, allowing access to the "heavy duty" variant for larger documents. This is the format produced by the "makeda" utility, and is thus useful while new file formats have not been created for the new development. It takes the same parameters as da_flimsy. |
db_flimsy | This is a proprietary, legacy format, holding a database in a dynamically updateable form (i.e., the database can be altered while queries are being performed on it). We support read-only access to this, and it is thus unlikely to be useful outside our company. This takes one or two parameters. The first parameter is the full path to the DB file. If a second parameter is supplied, it represents the full path to the fast access keyfile. If a second parameter is not supplied, the keyfile will be searched for at <first_parameter>_keyfile; if this doesn't exist, no keyfile will be used. |
db_heavy | This is similar to db_flimsy, allowing access to the "heavy duty" variant for larger documents. We support read-only access to this, and it is thus unlikely to be useful outside our company. It takes the same parameters as db_flimsy. |
inmemory | This type is a database held entirely in memory. It is really intended to be used for testing purposes only, but may occasionally prove useful for building up small temporary databases. It takes no parameters at all, except for some undocumented parameters which cause special effects for testing. |
Xapian can search across several databases as easily as across a single one. Simply call OmDatabase::add_database() for each database that you wish to search through.
Other operations, such as setting the query, may be performed before or after this call. It is even possible to perform a query, add a further database, and then perform the query again to get the results with the extra database (although this isn't very likely to be useful in practice).
Xapian implements both boolean and probabilistic searching. There are two obvious ways in which a pure boolean query can be combined with a pure probabilistic query: the boolean search can be performed first, with the probabilistic weights then calculated across only the resulting subset of documents; or the probabilistic search can be applied across the whole database, with the boolean query acting as a filter on its results.
Suppose for example the boolean query is being used to retrieve documents in English in a database containing English and French documents. A word like "grand", exists in both languages (with similar meanings), but is commoner in French than English. In the English subset it could therefore be expected to have a higher weight than it would get in the joint English and French databases.
In fact Xapian, as described below, goes for the second approach, which can be implemented very efficiently, despite the fact that the first is more exact.
In reality, Xapian performs the combined boolean and probabilistic searches simultaneously. This allows various optimisations to be performed, such as giving up on calculating a boolean AND operation when the probabilistic weights that could result from further documents can have no effect on the result set. These optimisations have been found to give a two- or three-fold performance increase in certain cases. The performance is particularly good for queries containing many terms.
All queries are represented by OmQuery objects. The simplest possible (non-trivial) query is one which searches for a single term. This can be created as follows (where tname is the term to be searched for):

OmQuery query(tname);
A term in Xapian is represented simply by a string of binary characters. Usually, when searching text, these characters will be the word which the term represents, but during the information retrieval process Xapian attaches no specific meaning to the term.
This constructor actually takes a couple of extra parameters, which may be used to specify positional and frequency information for terms in the query:
OmQuery(const om_termname & tname_, om_termcount wqf_ = 1, om_termpos term_pos_ = 0)
The wqf (Within Query Frequency) is a measure of how common a term is in the query. This is particularly useful when generating a query from an existing document, but may also be used as a crude way of increasing the importance of a term in a query. Note that, if the intention is simply to ensure that a particular term is in the query results, you should use a boolean AND rather than set a high wqf.
The term_pos represents the position of the term in the query. This is used for phrase searching, passage retrieval, and other operations which require knowledge of the order of terms in the query (such as returning the set of matching terms in a given document in the same order as they occur in the query). If such operations are not required, the default value of 0 may be used.
Note that it may not make much sense to specify a wqf other than 1 when supplying a term position (unless you are trying to affect the weighting, as previously described).
Note also that the results of OmQuery(tname, 2) and OmQuery(OmQuery::OP_OR, OmQuery(tname), OmQuery(tname)) are exactly equivalent.
Compound queries can be built up out of single-term queries. A compound query is made up of two sub-queries joined by a connecting operator, where each sub-query is itself either a compound query or a single-term query. This is done using the following constructor:
OmQuery(om_queryop op_, const OmQuery & left, const OmQuery & right)
The two most commonly used operators are OmQuery::OP_AND and OmQuery::OP_OR, which enable us to construct boolean queries made up from the usual AND and OR operations. But in addition to this, a probabilistic query in its simplest form, where we have a list of terms which give rise to weights that need to be added together, is also made up from a set of terms joined together with OmQuery::OP_OR.
The full set of available om_queryop operators is:
OmQuery::OP_AND | Return documents returned by both subqueries. |
OmQuery::OP_OR | Return documents returned by either subquery. |
OmQuery::OP_AND_NOT | Return documents returned by the left subquery but not the right subquery. |
OmQuery::OP_FILTER | As OmQuery::OP_AND, but use only weights from left subquery. |
OmQuery::OP_AND_MAYBE | Return documents returned by the left subquery, but adding document weights from both subqueries. |
OmQuery::OP_XOR | Return documents returned by one subquery only. |
Each term, t, in the query has a weight, wQ(t), given by
          (K' + 1) f't
 wQ(t) = -------------- w(t)
           K'L' + f't

where f't is the wqf of t in the query, L' is the nql, or normalised query length, and K' is a constant. The weight w(t) is given by
             (r + h) (N - R - n + r + h)
 w(t) = log -----------------------------,  where h = 1/2
                (R - r + h) (n - r + h)

See the Introduction to Information Retrieval for a full discussion. For any particular document, D, if t indexes D, there is a weight wD(t), which is the contribution, or partial score, of term t to the total score for document D, and it is given by
          (K + 1) ft
 wD(t) = ------------ wQ(t)
           KL + ft
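To make the formulae concrete, here is a small numeric sketch. The collection statistics (N documents in total, n containing t, R marked relevant, r relevant documents containing t) and the constants K, K', L, L' are sample values chosen purely for illustration; ft is taken as the frequency of t within the document:

```cpp
#include <cmath>

// w(t): term weight from collection statistics, with h = 1/2 as above.
double term_weight(double N, double n, double R, double r) {
    const double h = 0.5;
    return std::log(((r + h) * (N - R - n + r + h)) /
                    ((R - r + h) * (n - r + h)));
}

// wQ(t): query weight, from the wqf f't, normalised query length L',
// and constant K'.
double query_weight(double wt, double fqt, double Kq, double Lq) {
    return ((Kq + 1.0) * fqt / (Kq * Lq + fqt)) * wt;
}

// wD(t): the partial score contributed by t to document D, from the
// within-document frequency ft, document length L, and constant K.
double doc_weight(double wqt, double ft, double K, double L) {
    return ((K + 1.0) * ft / (K * L + ft)) * wqt;
}
```

With sample statistics N=1000, n=50, R=10, r=5, the term weight w(t) comes out at roughly 3.03; with K'=L'=1 and f't=1 the query weight equals w(t); and with K=L=1 and ft=3 the document's partial score is 1.5 times that.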
A query can be thought of as a tree structure. At each node is an om_queryop operator, and on the left and right branches are two other queries. At each leaf node is a term, t, transmitting documents and scores, D and wD(t), up the tree.
An OmQuery::OP_OR node transmits documents from both branches up the tree, summing the scores when a document is found in both the left and right branch. For example,
 docs      1    8   12   16   17   18
 scores   7.3  4.1  3.2  7.6  3.8  4.7 ...
                     |
                     |
              OmQuery::OP_OR
                /        \
               /          \
 docs      1   12   16   17        1    8   16   18
 scores   3.1  3.2  3.1  3.8 ...  4.2  4.1  4.5  4.7 ...

An OmQuery::OP_AND node transmits only the documents found on both branches up the tree, again summing the scores:
 docs      1   16
 scores   7.3  7.6 ...
                |
                |
         OmQuery::OP_AND
            /        \
           /          \
 docs      1   12   16   17        1    8   16   18
 scores   3.1  3.2  3.1  3.8 ...  4.2  4.1  4.5  4.7 ...

An OmQuery::OP_AND_NOT node transmits up the tree the documents on the left branch which are not on the right branch; the scores are taken from the left branch. For example:
 docs     12   17
 scores   3.2  3.8 ...
                |
                |
       OmQuery::OP_AND_NOT
            /        \
           /          \
 docs      1   12   16   17        1    8   16   18
 scores   3.1  3.2  3.1  3.8 ...  4.2  4.1  4.5  4.7 ...

An OmQuery::OP_AND_MAYBE node transmits the documents up the tree from the left branch only, but adds in the score from the right branch for documents which occur on both branches. For example:
 docs      1   12   16   17
 scores   7.3  3.2  7.6  3.8 ...
                 |
                 |
      OmQuery::OP_AND_MAYBE
            /        \
           /          \
 docs      1   12   16   17        1    8   16   18
 scores   3.1  3.2  3.1  3.8 ...  4.2  4.1  4.5  4.7 ...

OmQuery::OP_FILTER is like OmQuery::OP_AND, but weights are only transmitted from the left branch. For example:
 docs      1   16
 scores   3.1  3.1 ...
                |
                |
        OmQuery::OP_FILTER
            /        \
           /          \
 docs      1   12   16   17        1    8   16   18
 scores   3.1  3.2  3.1  3.8 ...  4.2  4.1  4.5  4.7 ...

OmQuery::OP_XOR is like OmQuery::OP_OR, but documents on both the left and right branches are not transmitted up the tree. For example:
 docs      8   12   17   18
 scores   4.1  3.2  3.8  4.7 ...
                 |
                 |
          OmQuery::OP_XOR
            /        \
           /          \
 docs      1   12   16   17        1    8   16   18
 scores   3.1  3.2  3.1  3.8 ...  4.2  4.1  4.5  4.7 ...

OmQuery::OP_XOR is used internally, but we have not found a plausible use for it in query construction, so it will not be mentioned again.
A query can therefore be thought of as a process for generating an M set from the terms at the leaf nodes of the query. Each leaf node gives rise to a posting list of documents with scores. Each higher level node gives rise to a similar list, and the root node of the tree contains the final set of documents with scores (or weights), which are candidates for going into the M set. The M set contains the documents which get the highest weights, and they are held in the M set in weight order.
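The combining rules illustrated in the diagrams above can be sketched directly as operations on posting lists, here modelled as maps from document id to score. This is our own illustrative code, not the Xapian implementation (which streams postings lazily rather than materialising whole lists):

```cpp
#include <map>

typedef std::map<int, double> Postings;  // document id -> score

// OP_OR: documents from either branch; scores summed where both match.
Postings op_or(const Postings &l, const Postings &r) {
    Postings out(l);
    for (Postings::const_iterator i = r.begin(); i != r.end(); ++i)
        out[i->first] += i->second;
    return out;
}

// OP_AND: only documents on both branches; scores summed.
Postings op_and(const Postings &l, const Postings &r) {
    Postings out;
    for (Postings::const_iterator i = l.begin(); i != l.end(); ++i) {
        Postings::const_iterator j = r.find(i->first);
        if (j != r.end()) out[i->first] = i->second + j->second;
    }
    return out;
}

// OP_AND_NOT: documents on the left branch only; left scores kept.
Postings op_and_not(const Postings &l, const Postings &r) {
    Postings out;
    for (Postings::const_iterator i = l.begin(); i != l.end(); ++i)
        if (r.find(i->first) == r.end()) out[i->first] = i->second;
    return out;
}

// OP_AND_MAYBE: left documents, adding the right score where present.
Postings op_and_maybe(const Postings &l, const Postings &r) {
    Postings out(l);
    for (Postings::iterator i = out.begin(); i != out.end(); ++i) {
        Postings::const_iterator j = r.find(i->first);
        if (j != r.end()) i->second += j->second;
    }
    return out;
}

// OP_FILTER: as OP_AND, but only left scores are kept.
Postings op_filter(const Postings &l, const Postings &r) {
    Postings out;
    for (Postings::const_iterator i = l.begin(); i != l.end(); ++i)
        if (r.find(i->first) != r.end()) out[i->first] = i->second;
    return out;
}
```

With the left branch {1: 3.1, 12: 3.2, 16: 3.1, 17: 3.8} and right branch {1: 4.2, 8: 4.1, 16: 4.5, 18: 4.7} from the diagrams, op_and() yields {1: 7.3, 16: 7.6}, matching the OP_AND figure, and the other operators reproduce the remaining figures likewise.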
It is important to realise that within Xapian the structure of a query is optimised for best performance, and it undergoes various transformations as the query progresses. The precise way in which the query is built up is therefore of little importance.
OmQuery query; // undefined query; see next section
query = OmQuery(OmQuery::OP_OR, query, OmQuery("regulation"));
query = OmQuery(OmQuery::OP_OR, query, OmQuery("import"));
query = OmQuery(OmQuery::OP_OR, query, OmQuery("export"));
query = OmQuery(OmQuery::OP_OR, query, OmQuery("canned"));
query = OmQuery(OmQuery::OP_OR, query, OmQuery("fish"));

This creates a probabilistic query with terms `regulation', `import', `export', `canned' and `fish'.
In fact this style of creation is so common that there is the shortcut construction:
vector<om_termname> terms;
terms.push_back("regulation");
terms.push_back("import");
terms.push_back("export");
terms.push_back("canned");
terms.push_back("fish");
OmQuery query(OmQuery::OP_OR, terms.begin(), terms.end());

Suppose now we have this Boolean query:
('EEC' - 'France') and ('1989' or '1991' or '1992') and 'Corporate Law'

This could be built up as bquery like this:
OmQuery bquery1(OmQuery::OP_AND_NOT, "EEC", "France");
OmQuery bquery2("1989");
bquery2 = OmQuery(OmQuery::OP_OR, bquery2, "1991");
bquery2 = OmQuery(OmQuery::OP_OR, bquery2, "1992");
OmQuery bquery3("Corporate Law");
OmQuery bquery(OmQuery::OP_AND, bquery1, OmQuery(OmQuery::OP_AND, bquery2, bquery3));

and this can be attached as a filter to query to run the probabilistic query with a Boolean filter:
query = OmQuery(OmQuery::OP_FILTER, query, bquery);

This is the general technique for processing boolean queries, so to run a pure boolean query, attach it as a filter to an undefined query:
bquery = OmQuery(OmQuery::OP_FILTER, OmQuery(), bquery); // bquery will now run as a pure boolean

A common requirement in search engine functionality is to run a probabilistic query where some terms are required to index all the retrieved documents (`+' terms), and others are required to index none of the retrieved documents (`-' terms). For example, given
regulation import export +canned +fish -japan

the corresponding query can be set up by:
vector<om_termname> plus_terms;
vector<om_termname> minus_terms;
vector<om_termname> normal_terms;
plus_terms.push_back("canned");
plus_terms.push_back("fish");
minus_terms.push_back("japan");
normal_terms.push_back("regulation");
normal_terms.push_back("import");
normal_terms.push_back("export");
OmQuery query(OmQuery::OP_AND_MAYBE,
              OmQuery(OmQuery::OP_AND, plus_terms.begin(), plus_terms.end()),
              OmQuery(OmQuery::OP_OR, normal_terms.begin(), normal_terms.end()));
query = OmQuery(OmQuery::OP_AND_NOT, query,
                OmQuery(OmQuery::OP_OR, minus_terms.begin(), minus_terms.end()));
Undefined queries are an added complication, although they make it possible to write much neater code, and to perform some extra types of query. (See "Specifying a pure boolean query".)
Undefined queries are not empty queries, or queries which match nothing: rather, they should be thought of as placeholders. An undefined query is created by calling the default constructor for OmQuery(), and can then be used in many places in construction of a query.
Occasionally it may be desirable to perform a purely boolean query, and not to calculate weights for each document. This can be done within Xapian by attaching the query as a filter to an undefined query. For example:

OmQuery query(OmQuery::OP_AND_NOT, OmQuery("cheese"), OmQuery("bread"));
OmQuery boolquery(OmQuery::OP_FILTER, OmQuery(), query);
The OmEnquire class does not require that a method be called in order to perform the query. Rather, you simply ask for the results of a query, and it will perform whatever calculations are necessary to provide the answer:
OmMSet OmEnquire::get_mset(om_doccount first,
                           om_doccount maxitems,
                           const OmRSet * omrset = 0,
                           const OmMatchOptions * moptions = 0,
                           const OmMatchDecider * mdecider = 0) const
When asking for the results, you must specify (in first) the first item in the result set to return, where the numbering starts at zero (so a value of zero corresponds to the first item returned being that with the highest score, and a value of 10 corresponds to the first 10 items being ignored, with the returned items starting at the eleventh).
You must also specify (in maxitems) the maximum number of items to return. Unless there are not enough matching items, precisely this number of items will be returned.
If maxitems is zero, no items will be returned, but the usual statistics (such as the maximum possible weight which a document could be assigned by the query) will still be calculated. (See "The OmMSet" below.)
Query results are returned in an OmMSet object. The prime field in this is items, which is a list of OmMSetItems comprising the selected part of the match results. This list is in descending order of relevance (so the most relevant document is first in the list).
Each OmMSetItem contains a document id, and the weight calculated for that document.
An OmMSet also contains various information about the search result:
firstitem | The index of the first item in the result which was put into the mset. (Corresponds to first in OmEnquire::get_mset().) |
max_attained | The greatest weight which is attained in the full results of the search. |
max_possible | The maximum possible weight in the mset. |
docs_considered | The number of documents matching the query which were considered for the mset. This provides a lower bound on the number of documents in the database which have a weight greater than zero. Note that this value may change if the search is recalculated with different values for first or maxitems. |
See the automatically extracted documentation for more details of these fields.
The OmMSet also provides methods for converting the score calculated for a given document into a percentage value, suitable for displaying to a user. This may be done using the convert_to_percent() methods:

int OmMSet::convert_to_percent(const OmMSetItem & item) const
int OmMSet::convert_to_percent(om_weight wt) const

These methods return a value in the range 0 to 100, which will be 0 if and only if the item did not match the query at all.
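The exact scaling is internal to Xapian, but one plausible scheme — shown here purely as an illustration of the stated contract, not as Xapian's actual algorithm — scales the weight against the maximum possible weight while guaranteeing that only a non-matching item maps to 0:

```cpp
// Illustrative only: map a weight into 0..100, returning 0 if and only
// if the document did not match the query at all.
int convert_to_percent_sketch(double wt, double max_possible) {
    if (wt <= 0.0 || max_possible <= 0.0) return 0;  // no match at all
    int pc = static_cast<int>(100.0 * wt / max_possible + 0.5);
    if (pc > 100) pc = 100;
    if (pc == 0) pc = 1;  // a matching document never reports 0
    return pc;
}
```

Note the clamp at the bottom end: a tiny but non-zero weight still reports 1 percent, preserving the "0 if and only if no match" guarantee stated above.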
Each document in the database has some data associated with it, represented by an OmDocument object. There are some arbitrary numeric keys (which are not yet available, and mainly useful in the match process) and an arbitrary lump of data. To get the OmDocument object, use OmEnquire::get_doc(). The returned OmDocument is fairly cheap to copy around.
This data can be used to store a summary of the document along with a URL, for example, or anything else the application developer would like.
The data can be retrieved with OmDocument::get_data() from the OmDocument object. This returns a C++ string containing the data, which may include embedded nulls or other special characters.
Xapian supports the idea of relevance feedback: that is, of allowing the user to mark documents as being relevant to the search, and using this information to modify the search. This is supported by means of relevance sets, which are simply sets of document ids marked as relevant. These are held in OmRSet objects, one of which may optionally be supplied to Xapian in the omrset parameter when calling OmEnquire::get_mset().
There are various additional options which may be specified when performing the query. These are specified by passing an OmMatchOptions object. If no such object is passed, the default options will be used. The options are as follows.
collapse key | Each document in a database may have a set of numbered keys. The contents of each key is a string of arbitrary length. The OmMatchOptions::set_collapse_key(om_keyno key_) method specifies a key number upon which to remove duplicates, and the OmMatchOptions::set_no_collapse() method specifies that no duplicate removal should be done. Only one duplicate removal key may be specified at any time, and the default is to perform no duplicate removal. |
percentage cutoff | It may occasionally be desirable to exclude any documents which have a weight less than a given percentage value. This may be done using OmMatchOptions::set_percentage_cutoff(). Note that documents rarely score 100 percent, so calling set_percentage_cutoff(100) is unlikely to return any documents. |
sort direction | Some weighting functions may frequently result in several documents being returned with the same weight. In this case, by default, the documents will be returned in ascending document id order. This can be changed by using OmMatchOptions::set_sort_forward() to set the sort direction. set_sort_forward(false) may be useful, for example, when it would be best to return the newest documents, and new documents are being added to the end of the database. |
Sometimes it may be useful to return only documents matching criteria which can't easily be represented by queries. This can be done using a match decision functor. To set such a condition, derive a class from OmMatchDecider and override the function operator, operator()(const OmDocument *doc). The operator can make a decision based on the document keys via OmDocument::get_key(om_keyno). The functor also has access to the document data stored in the database (via OmDocument::get_data()), but beware that, for most database backends, this is an expensive operation and is likely to slow down the search considerably.
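The pattern looks like the following sketch, which uses minimal stand-ins for OmDocument and OmMatchDecider (the real classes come from the Xapian headers, and the key numbering and language-tag convention here are invented for the example):

```cpp
#include <map>
#include <string>

// Minimal stand-in for OmDocument, for illustration only.
struct Document {
    std::map<int, std::string> keys;
    std::string get_key(int keyno) const {
        std::map<int, std::string>::const_iterator i = keys.find(keyno);
        return i == keys.end() ? std::string() : i->second;
    }
};

// Minimal stand-in for the decision functor base class.
struct MatchDecider {
    virtual ~MatchDecider() {}
    // Return nonzero to accept the document, zero to reject it.
    virtual int operator()(const Document *doc) const = 0;
};

// Accept only documents whose (hypothetical) key 0 holds the language
// tag "en", so the match only returns English documents.
class EnglishOnly : public MatchDecider {
public:
    int operator()(const Document *doc) const {
        return doc->get_key(0) == "en";
    }
};
```

An instance of such a class would then be passed as the mdecider argument of get_mset(), and the matcher would consult it for each candidate document.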
Xapian also supports the idea of calculating terms to add to the query, based on the relevant documents supplied. A set of such terms, together with their weights, may be returned by:
OmESet OmEnquire::get_eset(om_termcount maxitems,
                           const OmRSet & omrset,
                           bool exclude_query_terms = true,
                           bool use_exact_termfreq = false,
                           double k = 1.0,
                           const OmExpandDecider * edecider = 0) const;
OmESet OmEnquire::get_eset(om_termcount maxitems,
                           const OmRSet & omrset,
                           const OmExpandDecider * edecider) const;
As for get_mset, up to maxitems expand terms will be returned, with fewer being returned if and only if no more terms could be found. The expand terms are returned in sorted weight order in an OmESet object.
By default, terms which are already in the query will never be returned by get_eset(). If exclude_query_terms is false, then query terms may be returned.
By default, Xapian uses an approximation to the term frequency when get_eset() is called while searching over multiple databases. This approximation improves performance, and usually still returns good results. If you're willing to pay the performance penalty, you can get Xapian to calculate the exact term frequencies by passing true for use_exact_termfreq.
It is often useful to allow only certain classes of term to be returned in the expand set. For example, there may be special terms in the database with various prefixes, which should be removed from the expand set. This is accomplished by providing a decision functor. To do this, derive a class from OmExpandDecider and override the function operator, operator()(const om_termname &).
The functor is called with each term before it is added to the set, and it may accept (by returning true) or reject (by returning false) the term as appropriate.
There's no pthread-specific code in Xapian. If you want to use the same object concurrently from different threads, it's up to you to police access (with a mutex or in some other way) to ensure that only one method is being executed at once. The reason for this is to avoid adding the overhead of locking and unlocking mutexes when they aren't required; it also makes the Xapian code easier to maintain, and simplifies building it.
For most applications, this is unlikely to be an issue - generally the calls to Xapian are likely to be from a single thread. And if they aren't, you can just create an entirely separate OmDatabase object in each thread - this is no different to accessing the same database from two different processes.
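One way to police access, sketched below with modern C++ threading primitives, is to wrap the shared object so that every call takes a mutex. The Searcher class here is a placeholder for a shared enquiry object, not a real Xapian class:

```cpp
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Placeholder for a shared object that is not safe to use concurrently.
class Searcher {
    int queries_run;
public:
    Searcher() : queries_run(0) {}
    int run_query(const std::string &) { return ++queries_run; }  // not thread safe
    int count() const { return queries_run; }
};

// Wrapper ensuring only one method executes at once, as the text requires.
class LockedSearcher {
    Searcher searcher;
    std::mutex mtx;
public:
    int run_query(const std::string &q) {
        std::lock_guard<std::mutex> lock(mtx);
        return searcher.run_query(q);
    }
    int count() {
        std::lock_guard<std::mutex> lock(mtx);
        return searcher.count();
    }
};

// Run 4 threads, each issuing 1000 queries against the shared object;
// with the mutex in place, no updates are lost.
int run_concurrently() {
    LockedSearcher shared;
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.push_back(std::thread([&shared] {
            for (int j = 0; j < 1000; ++j) shared.run_query("fish");
        }));
    for (std::thread &t : workers) t.join();
    return shared.count();
}
```

The alternative the next paragraph describes — one separate OmDatabase object per thread — avoids the locking overhead entirely and is usually the simpler choice.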
Extensively documented examples of simple usage of the Xapian API for creating databases and then for searching through them are given in the QuickStart tutorial.
Further examples of usage of Xapian are available in the xapian-examples package.