sparql Package

SPARQL implementation for RDFLib

New in version 4.0.

rdflib.plugins.sparql.CUSTOM_EVALS = {}

Custom evaluation functions

These must be functions taking (ctx, part); they should raise NotImplementedError if they cannot handle a given part

rdflib.plugins.sparql.SPARQL_DEFAULT_GRAPH_UNION = True

If True, the default graph in the RDF Dataset is the union of all named graphs (as with RDFLib’s ConjunctiveGraph)

rdflib.plugins.sparql.SPARQL_LOAD_GRAPHS = True

If True, FROM <uri> and FROM NAMED <uri> clauses will load/parse additional data

aggregates Module

class rdflib.plugins.sparql.aggregates.Accumulator(aggregation)[source]

Bases: object

abstract base class for different aggregation functions

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
__weakref__

list of weak references to the object (if defined)

dont_care(row)[source]

skips distinct test

set_value(bindings)[source]

sets final value in bindings

use_row(row)[source]

tests distinct with set

class rdflib.plugins.sparql.aggregates.Aggregator(aggregations)[source]

Bases: object

combines different Accumulator objects

__init__(aggregations)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
__weakref__

list of weak references to the object (if defined)

accumulator_classes = {'Aggregate_Avg': <class 'rdflib.plugins.sparql.aggregates.Average'>, 'Aggregate_Count': <class 'rdflib.plugins.sparql.aggregates.Counter'>, 'Aggregate_Sample': <class 'rdflib.plugins.sparql.aggregates.Sample'>, 'Aggregate_Sum': <class 'rdflib.plugins.sparql.aggregates.Sum'>, 'Aggregate_Min': <class 'rdflib.plugins.sparql.aggregates.Minimum'>, 'Aggregate_GroupConcat': <class 'rdflib.plugins.sparql.aggregates.GroupConcat'>, 'Aggregate_Max': <class 'rdflib.plugins.sparql.aggregates.Maximum'>}
get_bindings()[source]

calculate and set last values

update(row)[source]

update all own accumulators

class rdflib.plugins.sparql.aggregates.Average(aggregation)[source]

Bases: rdflib.plugins.sparql.aggregates.Accumulator

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
update(row, aggregator)[source]
class rdflib.plugins.sparql.aggregates.Counter(aggregation)[source]

Bases: rdflib.plugins.sparql.aggregates.Accumulator

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
eval_full_row(row)[source]
eval_row(row)[source]
get_value()[source]
update(row, aggregator)[source]
use_row(row)[source]
class rdflib.plugins.sparql.aggregates.Extremum(aggregation)[source]

Bases: rdflib.plugins.sparql.aggregates.Accumulator

abstract base class for Minimum and Maximum

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
set_value(bindings)[source]
update(row, aggregator)[source]
class rdflib.plugins.sparql.aggregates.GroupConcat(aggregation)[source]

Bases: rdflib.plugins.sparql.aggregates.Accumulator

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
update(row, aggregator)[source]
class rdflib.plugins.sparql.aggregates.Maximum(aggregation)[source]

Bases: rdflib.plugins.sparql.aggregates.Extremum

__module__ = 'rdflib.plugins.sparql.aggregates'
compare(val1, val2)[source]
class rdflib.plugins.sparql.aggregates.Minimum(aggregation)[source]

Bases: rdflib.plugins.sparql.aggregates.Extremum

__module__ = 'rdflib.plugins.sparql.aggregates'
compare(val1, val2)[source]
class rdflib.plugins.sparql.aggregates.Sample(aggregation)[source]

Bases: rdflib.plugins.sparql.aggregates.Accumulator

takes the first eligible value

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
update(row, aggregator)[source]
class rdflib.plugins.sparql.aggregates.Sum(aggregation)[source]

Bases: rdflib.plugins.sparql.aggregates.Accumulator

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
update(row, aggregator)[source]
rdflib.plugins.sparql.aggregates.type_safe_numbers(*args)[source]

algebra Module

Converting the ‘parse-tree’ output of pyparsing to a SPARQL Algebra expression

http://www.w3.org/TR/sparql11-query/#sparqlQuery

rdflib.plugins.sparql.algebra.BGP(triples=None)[source]
rdflib.plugins.sparql.algebra.Extend(p, expr, var)[source]
rdflib.plugins.sparql.algebra.Filter(expr, p)[source]
rdflib.plugins.sparql.algebra.Graph(term, graph)[source]
rdflib.plugins.sparql.algebra.Group(p, expr=None)[source]
rdflib.plugins.sparql.algebra.Join(p1, p2)[source]
rdflib.plugins.sparql.algebra.LeftJoin(p1, p2, expr)[source]
rdflib.plugins.sparql.algebra.Minus(p1, p2)[source]
rdflib.plugins.sparql.algebra.OrderBy(p, expr)[source]
rdflib.plugins.sparql.algebra.Project(p, PV)[source]
exception rdflib.plugins.sparql.algebra.StopTraversal(rv)[source]

Bases: exceptions.Exception

__init__(rv)[source]
__module__ = 'rdflib.plugins.sparql.algebra'
__weakref__

list of weak references to the object (if defined)

rdflib.plugins.sparql.algebra.ToMultiSet(p)[source]
rdflib.plugins.sparql.algebra.Union(p1, p2)[source]
rdflib.plugins.sparql.algebra.Values(res)[source]
rdflib.plugins.sparql.algebra.analyse(n, children)[source]

Some things can be lazily joined. This propagates that information up the tree and sets lazy flags for all joins

rdflib.plugins.sparql.algebra.collectAndRemoveFilters(parts)[source]

FILTER expressions apply to the whole group graph pattern in which they appear.

http://www.w3.org/TR/sparql11-query/#sparqlCollectFilters

rdflib.plugins.sparql.algebra.pprintAlgebra(q)[source]
rdflib.plugins.sparql.algebra.reorderTriples(l)[source]

Reorder triple patterns so that we execute the ones with most bindings first

rdflib.plugins.sparql.algebra.simplify(n)[source]

Remove joins to empty BGPs

rdflib.plugins.sparql.algebra.translate(q)[source]

http://www.w3.org/TR/sparql11-query/#convertSolMod

rdflib.plugins.sparql.algebra.translateAggregates(q, M)[source]
rdflib.plugins.sparql.algebra.translateExists(e)[source]

Translate the graph pattern used by EXISTS and NOT EXISTS http://www.w3.org/TR/sparql11-query/#sparqlCollectFilters

rdflib.plugins.sparql.algebra.translateGraphGraphPattern(graphPattern)[source]
rdflib.plugins.sparql.algebra.translateGroupGraphPattern(graphPattern)[source]

http://www.w3.org/TR/sparql11-query/#convertGraphPattern

rdflib.plugins.sparql.algebra.translateGroupOrUnionGraphPattern(graphPattern)[source]
rdflib.plugins.sparql.algebra.translateInlineData(graphPattern)[source]
rdflib.plugins.sparql.algebra.translatePName(p, prologue)[source]

Expand prefixed/relative URIs

rdflib.plugins.sparql.algebra.translatePath(p)[source]

Translate PropertyPath expressions

rdflib.plugins.sparql.algebra.translatePrologue(p, base, initNs=None, prologue=None)[source]
rdflib.plugins.sparql.algebra.translateQuads(quads)[source]
rdflib.plugins.sparql.algebra.translateQuery(q, base=None, initNs=None)[source]

Translate a query-parsetree to a SPARQL Algebra Expression

Return a rdflib.plugins.sparql.sparql.Query object

rdflib.plugins.sparql.algebra.translateUpdate(q, base=None, initNs=None)[source]

Returns a list of SPARQL Update Algebra expressions

rdflib.plugins.sparql.algebra.translateUpdate1(u, prologue)[source]
rdflib.plugins.sparql.algebra.translateValues(v)[source]
rdflib.plugins.sparql.algebra.traverse(tree, visitPre=<function <lambda>>, visitPost=<function <lambda>>, complete=None)[source]

Traverse tree, visiting each node with the given visit functions. A visit function may raise StopTraversal to stop the traversal. If complete is not None, it is returned on complete traversal; otherwise the transformed tree is returned

rdflib.plugins.sparql.algebra.triples(l)[source]

compat Module

Functions/methods to help support Python 2.5-2.7

datatypes Module

Utility functions for supporting the XML Schema Datatypes hierarchy

rdflib.plugins.sparql.datatypes.type_promotion(t1, t2)[source]

evaluate Module

These methods recursively evaluate the SPARQL Algebra

evalQuery is the entry point; it sets up the context and returns the SPARQLResult object

evalPart is called at each level and delegates to the right method

A rdflib.plugins.sparql.sparql.QueryContext is passed along, keeping the information needed for evaluation

A list of dicts (solution mappings) is returned, apart from GroupBy, which may also return a dict of lists of dicts

rdflib.plugins.sparql.evaluate.evalAggregateJoin(ctx, agg)[source]
rdflib.plugins.sparql.evaluate.evalAskQuery(ctx, query)[source]
rdflib.plugins.sparql.evaluate.evalBGP(ctx, bgp)[source]

A basic graph pattern

rdflib.plugins.sparql.evaluate.evalConstructQuery(ctx, query)[source]
rdflib.plugins.sparql.evaluate.evalDistinct(ctx, part)[source]
rdflib.plugins.sparql.evaluate.evalExtend(ctx, extend)[source]
rdflib.plugins.sparql.evaluate.evalFilter(ctx, part)[source]
rdflib.plugins.sparql.evaluate.evalGraph(ctx, part)[source]
rdflib.plugins.sparql.evaluate.evalGroup(ctx, group)[source]

http://www.w3.org/TR/sparql11-query/#defn_algGroup

rdflib.plugins.sparql.evaluate.evalJoin(ctx, join)[source]
rdflib.plugins.sparql.evaluate.evalLazyJoin(ctx, join)[source]

A lazy join pushes the variables bound in the first part into the second part, doing the join implicitly and hopefully evaluating far fewer triples

rdflib.plugins.sparql.evaluate.evalLeftJoin(ctx, join)[source]
rdflib.plugins.sparql.evaluate.evalMinus(ctx, minus)[source]
rdflib.plugins.sparql.evaluate.evalMultiset(ctx, part)[source]
rdflib.plugins.sparql.evaluate.evalOrderBy(ctx, part)[source]
rdflib.plugins.sparql.evaluate.evalPart(ctx, part)[source]
rdflib.plugins.sparql.evaluate.evalProject(ctx, project)[source]
rdflib.plugins.sparql.evaluate.evalQuery(graph, query, initBindings, base=None)[source]
rdflib.plugins.sparql.evaluate.evalReduced(ctx, part)[source]

apply REDUCED to result

REDUCED is not as strict as DISTINCT, but if the incoming rows were sorted it should produce the same result with limited extra memory and time per incoming row.
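The idea on sorted input can be sketched as follows (illustrative only, not the actual implementation): dropping consecutive duplicate rows reproduces the DISTINCT result with O(1) extra memory.

```python
# Approximate REDUCED on sorted rows by skipping consecutive duplicates.
def reduced(rows):
    prev = object()  # sentinel that compares unequal to everything
    for row in rows:
        if row != prev:
            yield row
        prev = row


# On unsorted input a duplicate may survive, which REDUCED permits:
print(list(reduced([1, 1, 2, 2, 2, 3, 1])))  # -> [1, 2, 3, 1]
```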

rdflib.plugins.sparql.evaluate.evalSelectQuery(ctx, query)[source]
rdflib.plugins.sparql.evaluate.evalSlice(ctx, slice)[source]
rdflib.plugins.sparql.evaluate.evalUnion(ctx, union)[source]
rdflib.plugins.sparql.evaluate.evalValues(ctx, part)[source]

evalutils Module

operators Module

This contains evaluation functions for expressions

They get bound as instance methods to the CompValue objects from parserutils using setEvalFn

rdflib.plugins.sparql.operators.AdditiveExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_ABS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-abs

rdflib.plugins.sparql.operators.Builtin_BNODE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-bnode

rdflib.plugins.sparql.operators.Builtin_BOUND(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-bound

rdflib.plugins.sparql.operators.Builtin_CEIL(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-ceil

rdflib.plugins.sparql.operators.Builtin_COALESCE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-coalesce

rdflib.plugins.sparql.operators.Builtin_CONCAT(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-concat

rdflib.plugins.sparql.operators.Builtin_CONTAINS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strcontains

rdflib.plugins.sparql.operators.Builtin_DATATYPE(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_DAY(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_ENCODE_FOR_URI(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_EXISTS(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_FLOOR(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-floor

rdflib.plugins.sparql.operators.Builtin_HOURS(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_IF(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-if

rdflib.plugins.sparql.operators.Builtin_IRI(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-iri

rdflib.plugins.sparql.operators.Builtin_LANG(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-lang

Returns the language tag of ltrl, if it has one. It returns “” if ltrl has no language tag. Note that the RDF data model does not include literals with an empty language tag.

rdflib.plugins.sparql.operators.Builtin_LANGMATCHES(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-langMatches

rdflib.plugins.sparql.operators.Builtin_LCASE(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_MD5(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_MINUTES(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_MONTH(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_NOW(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-now

rdflib.plugins.sparql.operators.Builtin_RAND(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#idp2133952

rdflib.plugins.sparql.operators.Builtin_REGEX(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-regex Invokes the XPath fn:matches function to match text against a regular expression pattern. The regular expression language is defined in XQuery 1.0 and XPath 2.0 Functions and Operators section 7.6.1 Regular Expression Syntax

rdflib.plugins.sparql.operators.Builtin_REPLACE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-replace

rdflib.plugins.sparql.operators.Builtin_ROUND(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-round

rdflib.plugins.sparql.operators.Builtin_SECONDS(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-seconds

rdflib.plugins.sparql.operators.Builtin_SHA1(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_SHA256(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_SHA384(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_SHA512(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_STR(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_STRAFTER(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strafter

rdflib.plugins.sparql.operators.Builtin_STRBEFORE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strbefore

rdflib.plugins.sparql.operators.Builtin_STRDT(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strdt

rdflib.plugins.sparql.operators.Builtin_STRENDS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strends

rdflib.plugins.sparql.operators.Builtin_STRLANG(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strlang

rdflib.plugins.sparql.operators.Builtin_STRLEN(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_STRSTARTS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strstarts

rdflib.plugins.sparql.operators.Builtin_STRUUID(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-struuid

rdflib.plugins.sparql.operators.Builtin_SUBSTR(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-substr

rdflib.plugins.sparql.operators.Builtin_TIMEZONE(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-timezone

Returns: the timezone part of arg as an xsd:dayTimeDuration.
Raises: an error if there is no timezone.
rdflib.plugins.sparql.operators.Builtin_TZ(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_UCASE(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_UUID(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-uuid

rdflib.plugins.sparql.operators.Builtin_YEAR(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_isBLANK(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_isIRI(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_isLITERAL(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_isNUMERIC(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_sameTerm(e, ctx)[source]
rdflib.plugins.sparql.operators.ConditionalAndExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.ConditionalOrExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.EBV(rt)[source]
  • If the argument is a typed literal with a datatype of xsd:boolean, the EBV is the value of that argument.
  • If the argument is a plain literal or a typed literal with a datatype of xsd:string, the EBV is false if the operand value has zero length; otherwise the EBV is true.
  • If the argument is a numeric type or a typed literal with a datatype derived from a numeric type, the EBV is false if the operand value is NaN or is numerically equal to zero; otherwise the EBV is true.
  • All other arguments, including unbound arguments, produce a type error.
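The rules above can be sketched in plain Python, using native booleans, strings and numbers as stand-ins for typed RDF literals (illustrative only, not the actual EBV implementation):

```python
def ebv(value):
    # xsd:boolean: the EBV is the value itself (check bool before int,
    # since bool is a subclass of int in Python)
    if isinstance(value, bool):
        return value
    # plain literal / xsd:string: false iff zero length
    if isinstance(value, str):
        return len(value) > 0
    # numeric: false iff NaN or numerically zero (NaN != NaN)
    if isinstance(value, (int, float)):
        return value == value and value != 0
    # everything else (including unbound) is a type error
    raise TypeError("EBV: type error for %r" % (value,))


print(ebv(""), ebv("x"), ebv(0.0), ebv(float("nan")), ebv(True))
# False True False False True
```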
rdflib.plugins.sparql.operators.Function(e, ctx)[source]

Custom functions (and casts!)

rdflib.plugins.sparql.operators.MultiplicativeExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.RelationalExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.UnaryMinus(expr, ctx)[source]
rdflib.plugins.sparql.operators.UnaryNot(expr, ctx)[source]
rdflib.plugins.sparql.operators.UnaryPlus(expr, ctx)[source]
rdflib.plugins.sparql.operators.and_(*args)[source]
rdflib.plugins.sparql.operators.datetime(e)[source]
rdflib.plugins.sparql.operators.literal(s)[source]
rdflib.plugins.sparql.operators.not_(arg)[source]
rdflib.plugins.sparql.operators.numeric(expr)[source]

Return a number from a literal (see http://www.w3.org/TR/xpath20/#promotion), or raise a TypeError

rdflib.plugins.sparql.operators.simplify(expr)[source]
rdflib.plugins.sparql.operators.string(s)[source]

Make sure the passed value is a string literal, i.e. a plain literal, xsd:string literal or language-tagged literal

parser Module

SPARQL 1.1 Parser

based on pyparsing

rdflib.plugins.sparql.parser.expandBNodeTriples(terms)[source]

expand [ ?p ?o ] syntax for implicit bnodes

rdflib.plugins.sparql.parser.expandCollection(terms)[source]

expand ( 1 2 3 ) notation for collections

rdflib.plugins.sparql.parser.expandTriples(terms)[source]

Expand the ; and , syntax for repeated predicates and subjects

rdflib.plugins.sparql.parser.expandUnicodeEscapes(q)[source]

The syntax of the SPARQL Query Language is expressed over code points in Unicode [UNICODE]. The encoding is always UTF-8 [RFC3629]. Unicode code points may also be expressed using a \uXXXX (U+0 to U+FFFF) or \UXXXXXXXX syntax (for U+10000 onwards), where X is a hexadecimal digit [0-9A-F]
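An illustrative re-implementation of this expansion (the parser's own version handles the same \u and \U escapes; the function name here is just a sketch):

```python
import re

def expand_unicode_escapes(q):
    # \uXXXX (4 hex digits) or \UXXXXXXXX (8 hex digits) -> code point
    def repl(m):
        return chr(int(m.group(1) or m.group(2), 16))
    return re.sub(r"\\u([0-9A-Fa-f]{4})|\\U([0-9A-Fa-f]{8})", repl, q)


print(expand_unicode_escapes(r"caf\u00E9"))  # café
```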

rdflib.plugins.sparql.parser.neg(literal)[source]
rdflib.plugins.sparql.parser.parseQuery(q)[source]
rdflib.plugins.sparql.parser.parseUpdate(q)[source]
rdflib.plugins.sparql.parser.setDataType(terms)[source]
rdflib.plugins.sparql.parser.setLanguage(terms)[source]

parserutils Module

class rdflib.plugins.sparql.parserutils.Comp(name, expr)[source]

Bases: pyparsing.TokenConverter

A pyparsing token for grouping things together with a label. Any sub-tokens that are not Params will be ignored.

Returns CompValue / Expr objects - depending on whether evalFn is set.

__init__(name, expr)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
__slotnames__ = []
postParse(instring, loc, tokenList)[source]
setEvalFn(evalfn)[source]
class rdflib.plugins.sparql.parserutils.CompValue(name, **values)[source]

Bases: collections.OrderedDict

The result of parsing a Comp. Any included Params are available as dict keys or as attributes

__getattr__(a)[source]
__getitem__(a)[source]
__init__(name, **values)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
__repr__()[source]
__str__()[source]
clone()[source]
get(a, variables=False, errors=False)[source]
class rdflib.plugins.sparql.parserutils.Expr(name, evalfn=None, **values)[source]

Bases: rdflib.plugins.sparql.parserutils.CompValue

A CompValue that is evaluatable

__init__(name, evalfn=None, **values)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
eval(ctx={})[source]
class rdflib.plugins.sparql.parserutils.Param(name, expr, isList=False)[source]

Bases: pyparsing.TokenConverter

A pyparsing token for labelling a part of the parse-tree. If isList is true, repeated occurrences of a ParamList have their values merged into a list

__init__(name, expr, isList=False)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
__slotnames__ = []
postParse2(tokenList)[source]
class rdflib.plugins.sparql.parserutils.ParamList(name, expr)[source]

Bases: rdflib.plugins.sparql.parserutils.Param

A shortcut for a Param with isList=True

__init__(name, expr)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
class rdflib.plugins.sparql.parserutils.ParamValue(name, tokenList, isList)[source]

Bases: object

The result of parsing a Param. This just keeps the name/value; all the cleverness is in the CompValue

__init__(name, tokenList, isList)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
__str__()[source]
class rdflib.plugins.sparql.parserutils.plist[source]

Bases: list

just a list, but with its own type so that we can check for it

__module__ = 'rdflib.plugins.sparql.parserutils'
rdflib.plugins.sparql.parserutils.prettify_parsetree(t, indent='', depth=0)[source]
rdflib.plugins.sparql.parserutils.value(ctx, val, variables=False, errors=False)[source]

Utility function for evaluating something...

Variables will be looked up in the context. Normally, an unbound variable is an error; set variables=True to return unbound variables instead.

Normally, an error is raised; set errors=True to return the error instead.

processor Module

Code for tying SPARQL Engine into RDFLib

These should be automatically registered with RDFLib

class rdflib.plugins.sparql.processor.SPARQLProcessor(graph)[source]

Bases: rdflib.query.Processor

__init__(graph)[source]
__module__ = 'rdflib.plugins.sparql.processor'
query(strOrQuery, initBindings={}, initNs={}, base=None, DEBUG=False)[source]

Evaluate a query with the given initial bindings, and initial namespaces. The given base is used to resolve relative URIs in the query and will be overridden by any BASE given in the query.

class rdflib.plugins.sparql.processor.SPARQLResult(res)[source]

Bases: rdflib.query.Result

__init__(res)[source]
__module__ = 'rdflib.plugins.sparql.processor'
class rdflib.plugins.sparql.processor.SPARQLUpdateProcessor(graph)[source]

Bases: rdflib.query.UpdateProcessor

__init__(graph)[source]
__module__ = 'rdflib.plugins.sparql.processor'
update(strOrQuery, initBindings={}, initNs={})[source]
rdflib.plugins.sparql.processor.prepareQuery(queryString, initNs={}, base=None)[source]

Parse and translate a SPARQL Query

rdflib.plugins.sparql.processor.processUpdate(graph, updateString, initBindings={}, initNs={}, base=None)[source]

Process a SPARQL Update request. Returns nothing on success; raises an exception on error

sparql Module

exception rdflib.plugins.sparql.sparql.AlreadyBound[source]

Bases: rdflib.plugins.sparql.sparql.SPARQLError

Raised when trying to bind a variable that is already bound!

__init__()[source]
__module__ = 'rdflib.plugins.sparql.sparql'
class rdflib.plugins.sparql.sparql.Bindings(outer=None, d=[])[source]

Bases: _abcoll.MutableMapping

A single level of a stack of variable-value bindings. Each dict keeps a reference to the dict below it; any failed lookup is propagated back

In Python 3.3+ this could be a collections.ChainMap
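The stacked-lookup behaviour it implements can be illustrated with collections.ChainMap itself:

```python
from collections import ChainMap

outer = {"x": 1}
inner = ChainMap({"y": 2}, outer)  # inner level is searched first

# a failed lookup in the inner level falls through to the outer one
print(inner["x"], inner["y"])  # 1 2
```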

__abstractmethods__ = frozenset([])
__contains__(key)[source]
__delitem__(key)[source]
__getitem__(key)[source]
__init__(outer=None, d=[])[source]
__iter__()[source]
__len__()[source]
__module__ = 'rdflib.plugins.sparql.sparql'
__repr__()[source]
__setitem__(key, value)[source]
__str__()[source]
class rdflib.plugins.sparql.sparql.FrozenBindings(ctx, *args, **kwargs)[source]

Bases: rdflib.plugins.sparql.sparql.FrozenDict

__abstractmethods__ = frozenset([])
__getitem__(key)[source]
__init__(ctx, *args, **kwargs)[source]
__module__ = 'rdflib.plugins.sparql.sparql'
bnodes
forget(before, _except=None)[source]

return a frozen dict only of bindings made in self since before

merge(other)[source]
now
project(vars)[source]
prologue
remember(these)[source]

return a frozen dict only of bindings in these

class rdflib.plugins.sparql.sparql.FrozenDict(*args, **kwargs)[source]

Bases: _abcoll.Mapping

An immutable hashable dict

Taken from http://stackoverflow.com/a/2704866/81121
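A simplified sketch of the idea (the real class is built on Mapping; this dict subclass only illustrates why hashability is useful: frozen solutions can live in sets or serve as dict keys):

```python
class FrozenDict(dict):
    # hash over the (key, value) pairs so equal dicts hash equally
    def __hash__(self):
        return hash(frozenset(self.items()))

    # block all mutation so the hash stays stable
    def _readonly(self, *args, **kwargs):
        raise TypeError("FrozenDict is immutable")
    __setitem__ = __delitem__ = update = _readonly


# duplicate solution mappings collapse when kept in a set
solutions = {FrozenDict(s=1), FrozenDict(s=1), FrozenDict(s=2)}
print(len(solutions))  # 2
```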

__abstractmethods__ = frozenset([])
__getitem__(key)[source]
__hash__()[source]
__init__(*args, **kwargs)[source]
__iter__()[source]
__len__()[source]
__module__ = 'rdflib.plugins.sparql.sparql'
__repr__()[source]
__str__()[source]
compatible(other)[source]
disjointDomain(other)[source]
merge(other)[source]
project(vars)[source]
exception rdflib.plugins.sparql.sparql.NotBoundError(msg=None)[source]

Bases: rdflib.plugins.sparql.sparql.SPARQLError

__init__(msg=None)[source]
__module__ = 'rdflib.plugins.sparql.sparql'
class rdflib.plugins.sparql.sparql.Prologue[source]

Bases: object

A class for holding prefix bindings and base URI information

__init__()[source]
__module__ = 'rdflib.plugins.sparql.sparql'
absolutize(iri)[source]

Apply BASE / PREFIXes to URIs (and to datatypes in Literals)

TODO: Move resolving URIs to pre-processing

bind(prefix, uri)[source]
resolvePName(prefix, localname)[source]
class rdflib.plugins.sparql.sparql.Query(prologue, algebra)[source]

Bases: object

A parsed and translated query

__init__(prologue, algebra)[source]
__module__ = 'rdflib.plugins.sparql.sparql'
class rdflib.plugins.sparql.sparql.QueryContext(graph=None, bindings=None, initBindings=None)[source]

Bases: object

Query context - passed along when evaluating the query

__getitem__(key)[source]
__init__(graph=None, bindings=None, initBindings=None)[source]
__module__ = 'rdflib.plugins.sparql.sparql'
__setitem__(key, value)[source]
clean()[source]
clone(bindings=None)[source]
dataset

current dataset

get(key, default=None)[source]
load(source, default=False, **kwargs)[source]
push()[source]
pushGraph(graph)[source]
solution(vars=None)[source]

Return a static copy of the current variable bindings as a dict

thaw(frozenbindings)[source]

Create a new read/write query context from the given solution

exception rdflib.plugins.sparql.sparql.SPARQLError(msg=None)[source]

Bases: exceptions.Exception

__init__(msg=None)[source]
__module__ = 'rdflib.plugins.sparql.sparql'
exception rdflib.plugins.sparql.sparql.SPARQLTypeError(msg)[source]

Bases: rdflib.plugins.sparql.sparql.SPARQLError

__init__(msg)[source]
__module__ = 'rdflib.plugins.sparql.sparql'

update Module

Code for carrying out Update Operations

rdflib.plugins.sparql.update.evalAdd(ctx, u)[source]

add all triples from src to dst

http://www.w3.org/TR/sparql11-update/#add

rdflib.plugins.sparql.update.evalClear(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#clear

rdflib.plugins.sparql.update.evalCopy(ctx, u)[source]

remove all triples from dst, then add all triples from src to dst

http://www.w3.org/TR/sparql11-update/#copy

rdflib.plugins.sparql.update.evalCreate(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#create

rdflib.plugins.sparql.update.evalDeleteData(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#deleteData

rdflib.plugins.sparql.update.evalDeleteWhere(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#deleteWhere

rdflib.plugins.sparql.update.evalDrop(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#drop

rdflib.plugins.sparql.update.evalInsertData(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#insertData

rdflib.plugins.sparql.update.evalLoad(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#load

rdflib.plugins.sparql.update.evalModify(ctx, u)[source]
rdflib.plugins.sparql.update.evalMove(ctx, u)[source]

remove all triples from dst, add all triples from src to dst, then remove all triples from src

http://www.w3.org/TR/sparql11-update/#move

rdflib.plugins.sparql.update.evalUpdate(graph, update, initBindings={})[source]

http://www.w3.org/TR/sparql11-update/#updateLanguage

‘A request is a sequence of operations [...] Implementations MUST ensure that operations of a single request are executed in a fashion that guarantees the same effects as executing them in lexical order.

Operations all result either in success or failure.

If multiple operations are present in a single request, then a result of failure from any operation MUST abort the sequence of operations, causing the subsequent operations to be ignored.’

This will return None on success and raise Exceptions on error