rdflib.plugins.sparql package

Subpackages

Submodules

rdflib.plugins.sparql.aggregates module

class rdflib.plugins.sparql.aggregates.Accumulator(aggregation)[source]

Bases: object

abstract base class for different aggregation functions

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
__weakref__

list of weak references to the object (if defined)

dont_care(row)[source]

skips distinct test

set_value(bindings)[source]

sets final value in bindings

use_row(row)[source]

tests distinct with set

class rdflib.plugins.sparql.aggregates.Aggregator(aggregations)[source]

Bases: object

combines different Accumulator objects

__init__(aggregations)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
__weakref__

list of weak references to the object (if defined)

accumulator_classes = {'Aggregate_Avg': <class 'rdflib.plugins.sparql.aggregates.Average'>, 'Aggregate_Count': <class 'rdflib.plugins.sparql.aggregates.Counter'>, 'Aggregate_GroupConcat': <class 'rdflib.plugins.sparql.aggregates.GroupConcat'>, 'Aggregate_Max': <class 'rdflib.plugins.sparql.aggregates.Maximum'>, 'Aggregate_Min': <class 'rdflib.plugins.sparql.aggregates.Minimum'>, 'Aggregate_Sample': <class 'rdflib.plugins.sparql.aggregates.Sample'>, 'Aggregate_Sum': <class 'rdflib.plugins.sparql.aggregates.Sum'>}
get_bindings()[source]

calculate and set last values

update(row)[source]

update all own accumulators
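
These classes are used internally when evaluating aggregate expressions; they are not normally instantiated directly. A sketch of triggering them through the public query API (the data and URIs are illustrative):

    from rdflib import Graph, Literal, URIRef

    g = Graph()
    price = URIRef("http://example.org/price")  # illustrative predicate
    g.add((URIRef("http://example.org/a"), price, Literal(3)))
    g.add((URIRef("http://example.org/b"), price, Literal(7)))

    # SUM and AVG are handled by the Sum and Average accumulators,
    # combined by an Aggregator.
    query = (
        "SELECT (SUM(?v) AS ?total) (AVG(?v) AS ?mean) "
        "WHERE { ?s <http://example.org/price> ?v }"
    )
    for row in g.query(query):
        print(row.total, row.mean)  # expected: 10 and 5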

class rdflib.plugins.sparql.aggregates.Average(aggregation)[source]

Bases: Accumulator

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
update(row, aggregator)[source]
class rdflib.plugins.sparql.aggregates.Counter(aggregation)[source]

Bases: Accumulator

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
eval_full_row(row)[source]
eval_row(row)[source]
get_value()[source]
update(row, aggregator)[source]
use_row(row)[source]

tests distinct with set

class rdflib.plugins.sparql.aggregates.Extremum(aggregation)[source]

Bases: Accumulator

abstract base class for Minimum and Maximum

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
set_value(bindings)[source]

sets final value in bindings

update(row, aggregator)[source]
class rdflib.plugins.sparql.aggregates.GroupConcat(aggregation)[source]

Bases: Accumulator

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
update(row, aggregator)[source]
class rdflib.plugins.sparql.aggregates.Maximum(aggregation)[source]

Bases: Extremum

__module__ = 'rdflib.plugins.sparql.aggregates'
compare(val1, val2)[source]
class rdflib.plugins.sparql.aggregates.Minimum(aggregation)[source]

Bases: Extremum

__module__ = 'rdflib.plugins.sparql.aggregates'
compare(val1, val2)[source]
class rdflib.plugins.sparql.aggregates.Sample(aggregation)[source]

Bases: Accumulator

takes the first eligible value

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
update(row, aggregator)[source]
class rdflib.plugins.sparql.aggregates.Sum(aggregation)[source]

Bases: Accumulator

__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
update(row, aggregator)[source]
rdflib.plugins.sparql.aggregates.type_safe_numbers(*args)[source]

rdflib.plugins.sparql.algebra module

Converting the ‘parse-tree’ output of pyparsing to a SPARQL Algebra expression

http://www.w3.org/TR/sparql11-query/#sparqlQuery
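
A short sketch of how this module is typically used together with the parser:

    from rdflib.plugins.sparql.parser import parseQuery
    from rdflib.plugins.sparql.algebra import translateQuery, pprintAlgebra

    # Parse a query string to a pyparsing parse tree, translate it into
    # a SPARQL algebra expression, and pretty-print the result.
    parse_tree = parseQuery("SELECT ?s WHERE { ?s ?p ?o } LIMIT 5")
    query = translateQuery(parse_tree)
    pprintAlgebra(query)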

rdflib.plugins.sparql.algebra.BGP(triples=None)[source]
Return type:

CompValue

exception rdflib.plugins.sparql.algebra.ExpressionNotCoveredException[source]

Bases: Exception

__module__ = 'rdflib.plugins.sparql.algebra'
__weakref__

list of weak references to the object (if defined)

rdflib.plugins.sparql.algebra.Extend(p, expr, var)[source]
Parameters:

p (CompValue) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.Filter(expr, p)[source]
Parameters:

p (CompValue) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.Graph(term, graph)[source]
Return type:

CompValue

rdflib.plugins.sparql.algebra.Group(p, expr=None)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Join(p1, p2)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.LeftJoin(p1, p2, expr)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Minus(p1, p2)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.OrderBy(p, expr)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Project(p, PV)[source]
Parameters:

p (CompValue) –

Return type:

CompValue

exception rdflib.plugins.sparql.algebra.StopTraversal(rv)[source]

Bases: Exception

__init__(rv)[source]
__module__ = 'rdflib.plugins.sparql.algebra'
__weakref__

list of weak references to the object (if defined)

rdflib.plugins.sparql.algebra.ToMultiSet(p)[source]
Parameters:

p (Union[List[Dict[Variable, Identifier]], CompValue]) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.Union(p1, p2)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Values(res)[source]
Return type:

CompValue

rdflib.plugins.sparql.algebra.analyse(n, children)[source]

Some things can be lazily joined. This propagates whether they can up the tree and sets lazy flags for all joins

rdflib.plugins.sparql.algebra.collectAndRemoveFilters(parts)[source]

FILTER expressions apply to the whole group graph pattern in which they appear.

http://www.w3.org/TR/sparql11-query/#sparqlCollectFilters

rdflib.plugins.sparql.algebra.pprintAlgebra(q)[source]
rdflib.plugins.sparql.algebra.reorderTriples(l_)[source]

Reorder triple patterns so that we execute the ones with the most bindings first

Parameters:

l_ (Iterable[Tuple[Identifier, Identifier, Identifier]]) –

Return type:

List[Tuple[Identifier, Identifier, Identifier]]

rdflib.plugins.sparql.algebra.simplify(n)[source]

Remove joins to empty BGPs

Return type:

Optional[CompValue]

rdflib.plugins.sparql.algebra.translate(q)[source]

http://www.w3.org/TR/sparql11-query/#convertSolMod

Parameters:

q (CompValue) –

Return type:

Tuple[CompValue, List[Variable]]

rdflib.plugins.sparql.algebra.translateAggregates(q, M)[source]
Parameters:
Return type:

Tuple[CompValue, List[Tuple[Variable, Variable]]]

rdflib.plugins.sparql.algebra.translateAlgebra(query_algebra)[source]
Parameters:

query_algebra (Query) – An algebra returned by the function call algebra.translateQuery(parse_tree).

Return type:

str

Returns:

The query form generated from the SPARQL 1.1 algebra tree for select queries.
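
A sketch of the round trip this function supports for select queries:

    from rdflib.plugins.sparql.parser import parseQuery
    from rdflib.plugins.sparql.algebra import translateQuery, translateAlgebra

    query_algebra = translateQuery(parseQuery("SELECT ?s WHERE { ?s ?p ?o }"))
    query_string = translateAlgebra(query_algebra)
    print(query_string)  # a SPARQL string rebuilt from the algebra tree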

rdflib.plugins.sparql.algebra.translateExists(e)[source]

Translate the graph pattern used by EXISTS and NOT EXISTS http://www.w3.org/TR/sparql11-query/#sparqlCollectFilters

Parameters:

e (Union[Expr, Literal, Variable]) –

Return type:

Union[Expr, Literal, Variable]

rdflib.plugins.sparql.algebra.translateGraphGraphPattern(graphPattern)[source]
Parameters:

graphPattern (CompValue) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.translateGroupGraphPattern(graphPattern)[source]

http://www.w3.org/TR/sparql11-query/#convertGraphPattern

Parameters:

graphPattern (CompValue) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.translateGroupOrUnionGraphPattern(graphPattern)[source]
Parameters:

graphPattern (CompValue) –

Return type:

Optional[CompValue]

rdflib.plugins.sparql.algebra.translateInlineData(graphPattern)[source]
Parameters:

graphPattern (CompValue) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.translatePName(p, prologue)[source]

Expand prefixed/relative URIs

Parameters:
rdflib.plugins.sparql.algebra.translatePath(p: URIRef) → None[source]
rdflib.plugins.sparql.algebra.translatePath(p: CompValue) → Path

Translate PropertyPath expressions

Parameters:

p (Union[CompValue, URIRef]) –

Return type:

Optional[Path]

rdflib.plugins.sparql.algebra.translatePrologue(p, base, initNs=None, prologue=None)[source]
Parameters:
Return type:

Prologue

rdflib.plugins.sparql.algebra.translateQuads(quads)[source]
Parameters:

quads (CompValue) –

rdflib.plugins.sparql.algebra.translateQuery(q, base=None, initNs=None)[source]

Translate a query-parsetree to a SPARQL Algebra Expression

Return a rdflib.plugins.sparql.sparql.Query object

Parameters:
Return type:

Query

rdflib.plugins.sparql.algebra.translateUpdate(q, base=None, initNs=None)[source]

Returns a list of SPARQL Update Algebra expressions

Parameters:
Return type:

Update

rdflib.plugins.sparql.algebra.translateUpdate1(u, prologue)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.translateValues(v)[source]
Parameters:

v (CompValue) –

Return type:

Union[List[Dict[Variable, Identifier]], CompValue]

rdflib.plugins.sparql.algebra.traverse(tree, visitPre=<function <lambda>>, visitPost=<function <lambda>>, complete=None)[source]

Traverse tree, visiting each node with the visit functions. A visit function may raise StopTraversal to stop the traversal. If complete is not None, it is returned on complete traversal; otherwise the transformed tree is returned.

Parameters:
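
A minimal sketch that collects all Variables from a translated query's algebra, assuming (per the description above) that a visit function returning None leaves a node unchanged:

    from rdflib.plugins.sparql.processor import prepareQuery
    from rdflib.plugins.sparql.algebra import traverse
    from rdflib.term import Variable

    q = prepareQuery("SELECT ?s WHERE { ?s ?p ?o }")

    found = []

    def visit(node):
        if isinstance(node, Variable):
            found.append(node)
        # returning None leaves the node unchanged

    traverse(q.algebra, visitPre=visit)
    print(found)  # expected to include ?s, ?p and ?o
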
rdflib.plugins.sparql.algebra.triples(l)[source]
Parameters:

l (Union[List[List[Identifier]], List[Tuple[Identifier, Identifier, Identifier]]]) –

Return type:

List[Tuple[Identifier, Identifier, Identifier]]

rdflib.plugins.sparql.datatypes module

Utility functions for supporting the XML Schema Datatypes hierarchy

rdflib.plugins.sparql.datatypes.type_promotion(t1, t2)[source]
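
A small sketch of the promotion rule; the expected result assumes the function returns the common promoted XSD datatype:

    from rdflib.namespace import XSD
    from rdflib.plugins.sparql.datatypes import type_promotion

    # integer combined with decimal should promote to decimal
    print(type_promotion(XSD.integer, XSD.decimal))  # expected: XSD.decimal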

rdflib.plugins.sparql.evaluate module

These methods recursively evaluate the SPARQL Algebra

evalQuery is the entry point; it will set up the context and return the SPARQLResult object

evalPart is called on each level and will delegate to the right method

A rdflib.plugins.sparql.sparql.QueryContext is passed along, keeping the information needed for evaluation

A list of dicts (solution mappings) is returned, apart from GroupBy, which may also return a dict of lists of dicts
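
These evaluators are normally driven indirectly through Graph.query, which ends up in evalQuery. A minimal sketch (the data is illustrative):

    from rdflib import Graph

    g = Graph()
    g.parse(
        data="<http://example.org/a> <http://example.org/p> <http://example.org/b> .",
        format="turtle",
    )

    # Graph.query sets up a QueryContext via evalQuery and walks the
    # algebra with evalPart, yielding solution mappings.
    for row in g.query("SELECT ?s ?o WHERE { ?s ?p ?o }"):
        print(row.s, row.o)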

rdflib.plugins.sparql.evaluate.evalAggregateJoin(ctx, agg)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalAskQuery(ctx, query)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalBGP(ctx, bgp)[source]

A basic graph pattern

Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalConstructQuery(ctx, query)[source]
Parameters:

ctx (QueryContext) –

Return type:

Dict[str, Union[str, Graph]]

rdflib.plugins.sparql.evaluate.evalDistinct(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalExtend(ctx, extend)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalFilter(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalGraph(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalGroup(ctx, group)[source]

http://www.w3.org/TR/sparql11-query/#defn_algGroup

Parameters:
rdflib.plugins.sparql.evaluate.evalJoin(ctx, join)[source]
Parameters:
Return type:

Generator[FrozenDict, None, None]

rdflib.plugins.sparql.evaluate.evalLazyJoin(ctx, join)[source]

A lazy join will push the variables bound in the first part to the second part, essentially doing the join implicitly and hopefully evaluating far fewer triples

Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalLeftJoin(ctx, join)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalMinus(ctx, minus)[source]
Parameters:
Return type:

Generator[FrozenDict, None, None]

rdflib.plugins.sparql.evaluate.evalMultiset(ctx, part)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalOrderBy(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalPart(ctx, part)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalProject(ctx, project)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalQuery(graph, query, initBindings, base=None)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalReduced(ctx, part)[source]

apply REDUCED to result

REDUCED is not as strict as DISTINCT, but if the incoming rows were sorted it should produce the same result with limited extra memory and time per incoming row.

Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalSelectQuery(ctx, query)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalServiceQuery(ctx, part)[source]
Parameters:

ctx (QueryContext) –

rdflib.plugins.sparql.evaluate.evalSlice(ctx, slice)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalUnion(ctx, union)[source]
Parameters:
Return type:

Iterable[FrozenBindings]

rdflib.plugins.sparql.evaluate.evalValues(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evalutils module

rdflib.plugins.sparql.operators module

This contains evaluation functions for expressions

They are bound as instance methods to the CompValue objects from parserutils using setEvalFn

rdflib.plugins.sparql.operators.AdditiveExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_ABS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-abs

rdflib.plugins.sparql.operators.Builtin_BNODE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-bnode

rdflib.plugins.sparql.operators.Builtin_BOUND(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-bound

rdflib.plugins.sparql.operators.Builtin_CEIL(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-ceil

rdflib.plugins.sparql.operators.Builtin_COALESCE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-coalesce

rdflib.plugins.sparql.operators.Builtin_CONCAT(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-concat

rdflib.plugins.sparql.operators.Builtin_CONTAINS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strcontains

rdflib.plugins.sparql.operators.Builtin_DATATYPE(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_DAY(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_ENCODE_FOR_URI(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_EXISTS(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_FLOOR(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-floor

rdflib.plugins.sparql.operators.Builtin_HOURS(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_IF(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-if

rdflib.plugins.sparql.operators.Builtin_IRI(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-iri

rdflib.plugins.sparql.operators.Builtin_LANG(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-lang

Returns the language tag of ltrl, if it has one. It returns “” if ltrl has no language tag. Note that the RDF data model does not include literals with an empty language tag.

rdflib.plugins.sparql.operators.Builtin_LANGMATCHES(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-langMatches

rdflib.plugins.sparql.operators.Builtin_LCASE(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_MD5(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_MINUTES(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_MONTH(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_NOW(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-now

rdflib.plugins.sparql.operators.Builtin_RAND(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#idp2133952

rdflib.plugins.sparql.operators.Builtin_REGEX(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-regex Invokes the XPath fn:matches function to match text against a regular expression pattern. The regular expression language is defined in XQuery 1.0 and XPath 2.0 Functions and Operators section 7.6.1 Regular Expression Syntax

rdflib.plugins.sparql.operators.Builtin_REPLACE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-replace

rdflib.plugins.sparql.operators.Builtin_ROUND(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-round

rdflib.plugins.sparql.operators.Builtin_SECONDS(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-seconds

rdflib.plugins.sparql.operators.Builtin_SHA1(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_SHA256(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_SHA384(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_SHA512(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_STR(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_STRAFTER(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strafter

rdflib.plugins.sparql.operators.Builtin_STRBEFORE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strbefore

rdflib.plugins.sparql.operators.Builtin_STRDT(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strdt

rdflib.plugins.sparql.operators.Builtin_STRENDS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strends

rdflib.plugins.sparql.operators.Builtin_STRLANG(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strlang

rdflib.plugins.sparql.operators.Builtin_STRLEN(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_STRSTARTS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strstarts

rdflib.plugins.sparql.operators.Builtin_STRUUID(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-struuid

rdflib.plugins.sparql.operators.Builtin_SUBSTR(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-substr

rdflib.plugins.sparql.operators.Builtin_TIMEZONE(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-timezone

Returns:

the timezone part of arg as an xsd:dayTimeDuration.

Raises:

an error if there is no timezone.

rdflib.plugins.sparql.operators.Builtin_TZ(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_UCASE(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_UUID(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-uuid

rdflib.plugins.sparql.operators.Builtin_YEAR(e, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_isBLANK(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_isIRI(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_isLITERAL(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_isNUMERIC(expr, ctx)[source]
rdflib.plugins.sparql.operators.Builtin_sameTerm(e, ctx)[source]
rdflib.plugins.sparql.operators.ConditionalAndExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.ConditionalOrExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.EBV(rt)[source]

Effective Boolean Value (EBV)

  • If the argument is a typed literal with a datatype of xsd:boolean, the EBV is the value of that argument.

  • If the argument is a plain literal or a typed literal with a datatype of xsd:string, the EBV is false if the operand value has zero length; otherwise the EBV is true.

  • If the argument is a numeric type or a typed literal with a datatype derived from a numeric type, the EBV is false if the operand value is NaN or is numerically equal to zero; otherwise the EBV is true.

  • All other arguments, including unbound arguments, produce a type error.
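
A small sketch of these rules (results shown as expectations):

    from rdflib import Literal
    from rdflib.plugins.sparql.operators import EBV
    from rdflib.plugins.sparql.sparql import SPARQLError

    print(EBV(Literal(True)))  # expected: True (xsd:boolean keeps its value)
    print(EBV(Literal("")))    # expected: False (zero-length string)
    print(EBV(Literal(0)))     # expected: False (numerically zero)
    try:
        EBV(None)              # unbound/non-literal arguments are a type error
    except SPARQLError as e:
        print("type error:", e)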

rdflib.plugins.sparql.operators.Function(e, ctx)[source]

Custom functions and casts

rdflib.plugins.sparql.operators.MultiplicativeExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.RelationalExpression(e, ctx)[source]
rdflib.plugins.sparql.operators.UnaryMinus(expr, ctx)[source]
rdflib.plugins.sparql.operators.UnaryNot(expr, ctx)[source]
rdflib.plugins.sparql.operators.UnaryPlus(expr, ctx)[source]
rdflib.plugins.sparql.operators.and_(*args)[source]
rdflib.plugins.sparql.operators.calculateDuration(obj1, obj2)[source]

returns the duration Literal between two datetime objects

rdflib.plugins.sparql.operators.calculateFinalDateTime(obj1, dt1, obj2, dt2, operation)[source]

Calculates the final dateTime/date/time result after addition/subtraction of a duration/dayTimeDuration/yearMonthDuration

rdflib.plugins.sparql.operators.custom_function(uri, override=False, raw=False)[source]

Decorator version of register_custom_function().

rdflib.plugins.sparql.operators.date(e)[source]
Return type:

date

rdflib.plugins.sparql.operators.dateTimeObjects(expr)[source]

return a dateTime/date/time/duration/dayTimeDuration/yearMonthDuration Python object from a literal

rdflib.plugins.sparql.operators.datetime(e)[source]
rdflib.plugins.sparql.operators.default_cast(e, ctx)[source]
rdflib.plugins.sparql.operators.isCompatibleDateTimeDatatype(obj1, dt1, obj2, dt2)[source]

Returns a boolean indicating whether the first object is compatible with the operation (+/-) over the second object.

rdflib.plugins.sparql.operators.literal(s)[source]
rdflib.plugins.sparql.operators.not_(arg)[source]
rdflib.plugins.sparql.operators.numeric(expr)[source]

return a number from a literal, applying XPath numeric type promotion (http://www.w3.org/TR/xpath20/#promotion), or raise a TypeError

rdflib.plugins.sparql.operators.register_custom_function(uri, func, override=False, raw=False)[source]

Register a custom SPARQL function.

By default, the function will be passed the RDF terms in the argument list. If raw is True, the function will be passed an Expression and a Context.

The function must return an RDF term, or raise a SparqlError.
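
A sketch of registering a function under a hypothetical URI (http://example.org/reverse and the reverse helper are illustrative, not part of the library):

    from rdflib import Graph, Literal, URIRef
    from rdflib.plugins.sparql.operators import (
        register_custom_function,
        unregister_custom_function,
    )

    EX_REVERSE = URIRef("http://example.org/reverse")  # hypothetical URI

    def reverse(term):
        # receives the RDF term from the argument list;
        # must return an RDF term (or raise a SparqlError)
        return Literal(str(term)[::-1])

    register_custom_function(EX_REVERSE, reverse)
    g = Graph()
    query = 'SELECT ?r WHERE { BIND(<http://example.org/reverse>("abc") AS ?r) }'
    for row in g.query(query):
        print(row.r)  # expected: "cba"
    unregister_custom_function(EX_REVERSE, reverse)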

rdflib.plugins.sparql.operators.simplify(expr)[source]
rdflib.plugins.sparql.operators.string(s)[source]

Make sure the passed thing is a string literal, i.e. a plain literal, an xsd:string literal, or a lang-tagged literal

rdflib.plugins.sparql.operators.unregister_custom_function(uri, func=None)[source]

The ‘func’ argument is included for compatibility with existing code. A previous implementation checked that the function associated with the given URI was actually ‘func’, but this is not necessary, as the URI should uniquely identify the function.

rdflib.plugins.sparql.parser module

SPARQL 1.1 Parser

based on pyparsing
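
A minimal sketch; parseQuery returns a pyparsing parse tree, which parserutils.prettify_parsetree can render for inspection:

    from rdflib.plugins.sparql.parser import parseQuery
    from rdflib.plugins.sparql.parserutils import prettify_parsetree

    parse_tree = parseQuery("SELECT * WHERE { ?s ?p ?o }")
    print(prettify_parsetree(parse_tree))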

rdflib.plugins.sparql.parser.expandBNodeTriples(terms)[source]

expand [ ?p ?o ] syntax for implicit bnodes

rdflib.plugins.sparql.parser.expandCollection(terms)[source]

expand ( 1 2 3 ) notation for collections

rdflib.plugins.sparql.parser.expandTriples(terms)[source]

Expand the ; and , syntax for repeated predicates and subjects

rdflib.plugins.sparql.parser.expandUnicodeEscapes(q)[source]

The syntax of the SPARQL Query Language is expressed over code points in Unicode [UNICODE]. The encoding is always UTF-8 [RFC3629]. Unicode code points may also be expressed using a \uXXXX (U+0 to U+FFFF) or \UXXXXXXXX syntax (for U+10000 onwards), where X is a hexadecimal digit [0-9A-F]
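
A small sketch; the input holds a literal backslash escape, which should be decoded:

    from rdflib.plugins.sparql.parser import expandUnicodeEscapes

    print(expandUnicodeEscapes("'caf\\u00e9'"))  # expected: 'café'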

rdflib.plugins.sparql.parser.neg(literal)[source]
rdflib.plugins.sparql.parser.parseQuery(q)[source]
rdflib.plugins.sparql.parser.parseUpdate(q)[source]
rdflib.plugins.sparql.parser.setDataType(terms)[source]
rdflib.plugins.sparql.parser.setLanguage(terms)[source]

rdflib.plugins.sparql.parserutils module

class rdflib.plugins.sparql.parserutils.Comp(name, expr)[source]

Bases: TokenConverter

A pyparsing token for grouping things together with a label. Any sub-tokens that are not Params will be ignored.

Returns CompValue / Expr objects, depending on whether evalFn is set.

__abstractmethods__ = frozenset({})
__init__(name, expr)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
__slotnames__ = []
postParse(instring, loc, tokenList)[source]
setEvalFn(evalfn)[source]
class rdflib.plugins.sparql.parserutils.CompValue(name, **values)[source]

Bases: OrderedDict

The result of parsing a Comp. Any included Params are available as dict keys or as attributes.

Parameters:

name (str) –
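
A small sketch of the dual dict/attribute access (missing attributes are assumed to yield None rather than raise):

    from rdflib.plugins.sparql.parserutils import CompValue

    cv = CompValue("Example", foo=1)
    print(cv.name)    # 'Example'
    print(cv["foo"])  # 1, dict-style access
    print(cv.foo)     # 1, attribute-style access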

__getattr__(a)[source]
Parameters:

a (str) –

Return type:

Any

__getitem__(a)[source]

x.__getitem__(y) <==> x[y]

__init__(name, **values)[source]
Parameters:

name (str) –

__module__ = 'rdflib.plugins.sparql.parserutils'
__repr__()[source]

Return repr(self).

__str__()[source]

Return str(self).

clone()[source]
get(a, variables=False, errors=False)[source]

Return the value for key if key is in the dictionary, else default.

class rdflib.plugins.sparql.parserutils.Expr(name, evalfn=None, **values)[source]

Bases: CompValue

A CompValue that is evaluatable

__init__(name, evalfn=None, **values)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
eval(ctx={})[source]
class rdflib.plugins.sparql.parserutils.Param(name, expr, isList=False)[source]

Bases: TokenConverter

A pyparsing token for labelling a part of the parse-tree. If isList is true, repeated occurrences of ParamList have their values merged into a list.

__abstractmethods__ = frozenset({})
__init__(name, expr, isList=False)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
__slotnames__ = []
postParse2(tokenList)[source]
class rdflib.plugins.sparql.parserutils.ParamList(name, expr)[source]

Bases: Param

A shortcut for a Param with isList=True

__abstractmethods__ = frozenset({})
__init__(name, expr)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
__slotnames__ = []
failAction: Optional[ParseFailAction]
ignoreExprs: List['ParserElement']
parseAction: List[ParseAction]
suppress_warnings_: List[Diagnostics]
class rdflib.plugins.sparql.parserutils.ParamValue(name, tokenList, isList)[source]

Bases: object

The result of parsing a Param. This just keeps the name/value; all the cleverness is in the CompValue.

__init__(name, tokenList, isList)[source]
__module__ = 'rdflib.plugins.sparql.parserutils'
__str__()[source]

Return str(self).

__weakref__

list of weak references to the object (if defined)

class rdflib.plugins.sparql.parserutils.plist(iterable=(), /)[source]

Bases: list

this is just a list, but we want our own type to check for

__module__ = 'rdflib.plugins.sparql.parserutils'
__weakref__

list of weak references to the object (if defined)

rdflib.plugins.sparql.parserutils.prettify_parsetree(t, indent='', depth=0)[source]
rdflib.plugins.sparql.parserutils.value(ctx, val, variables=False, errors=False)[source]

Utility function for evaluating something.

Variables will be looked up in the context. Normally, an unbound variable is an error; set variables=True to return unbound variables.

Normally, an error value raises the error; set errors=True to return the error instead.

Parameters:

rdflib.plugins.sparql.processor module

Code for tying SPARQL Engine into RDFLib

These should be automatically registered with RDFLib

class rdflib.plugins.sparql.processor.SPARQLProcessor(graph)[source]

Bases: Processor

__init__(graph)[source]
__module__ = 'rdflib.plugins.sparql.processor'
query(strOrQuery, initBindings={}, initNs={}, base=None, DEBUG=False)[source]

Evaluate a query with the given initial bindings, and initial namespaces. The given base is used to resolve relative URIs in the query and will be overridden by any BASE given in the query.

class rdflib.plugins.sparql.processor.SPARQLResult(res)[source]

Bases: Result

__init__(res)[source]
__module__ = 'rdflib.plugins.sparql.processor'
askAnswer: bool
graph: Graph
vars: Optional[List['Variable']]
class rdflib.plugins.sparql.processor.SPARQLUpdateProcessor(graph)[source]

Bases: UpdateProcessor

__init__(graph)[source]
__module__ = 'rdflib.plugins.sparql.processor'
update(strOrQuery, initBindings={}, initNs={})[source]
rdflib.plugins.sparql.processor.prepareQuery(queryString, initNs={}, base=None)[source]

Parse and translate a SPARQL Query

Return type:

Query
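
A typical usage sketch (the FOAF data and the tim URI are illustrative):

    from rdflib import Graph, URIRef
    from rdflib.namespace import FOAF
    from rdflib.plugins.sparql import prepareQuery

    q = prepareQuery(
        "SELECT ?name WHERE { ?person foaf:knows ?friend . ?friend foaf:name ?name . }",
        initNs={"foaf": FOAF},
    )

    g = Graph()  # assumed to hold FOAF data
    tim = URIRef("http://example.org/tim")  # illustrative resource
    for row in g.query(q, initBindings={"person": tim}):
        print(row.name)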

rdflib.plugins.sparql.processor.prepareUpdate(updateString, initNs={}, base=None)[source]

Parse and translate a SPARQL Update

rdflib.plugins.sparql.processor.processUpdate(graph, updateString, initBindings={}, initNs={}, base=None)[source]

Process a SPARQL Update request. Returns nothing on success or raises an exception on error.
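
A minimal sketch:

    from rdflib import Graph
    from rdflib.plugins.sparql.processor import processUpdate

    g = Graph()
    processUpdate(
        g, "INSERT DATA { <http://example.org/a> <http://example.org/p> 'hello' }"
    )
    print(len(g))  # expected: 1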

rdflib.plugins.sparql.sparql module

exception rdflib.plugins.sparql.sparql.AlreadyBound[source]

Bases: SPARQLError

Raised when trying to bind a variable that is already bound!

__init__()[source]
__module__ = 'rdflib.plugins.sparql.sparql'
class rdflib.plugins.sparql.sparql.Bindings(outer=None, d=[])[source]

Bases: MutableMapping

A single level of a stack of variable-value bindings. Each dict keeps a reference to the dict below it; any failed lookup is propagated back.

In Python 3.3+ this could be a collections.ChainMap

Parameters:

outer (Optional[Bindings]) –
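
A minimal sketch of the stacked lookup (plain string keys and values, matching the class annotations):

    from rdflib.plugins.sparql.sparql import Bindings

    outer = Bindings(d=[("x", "1")])
    inner = Bindings(outer=outer, d=[("y", "2")])

    print(inner["y"])    # found at this level
    print(inner["x"])    # falls through to the outer level
    print("x" in inner)  # expected: True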

__abstractmethods__ = frozenset({})
__contains__(key)[source]
Parameters:

key (Any) –

Return type:

bool

__delitem__(key)[source]
Parameters:

key (str) –

Return type:

None

__getitem__(key)[source]
Parameters:

key (str) –

Return type:

str

__init__(outer=None, d=[])[source]
Parameters:

outer (Optional[Bindings]) –

__iter__()[source]
__len__()[source]
Return type:

int

__module__ = 'rdflib.plugins.sparql.sparql'
__repr__()[source]

Return repr(self).

Return type:

str

__setitem__(key, value)[source]
Parameters:
  • key (str) –

  • value (Any) –

Return type:

None

__str__()[source]

Return str(self).

Return type:

str

__weakref__

list of weak references to the object (if defined)

class rdflib.plugins.sparql.sparql.FrozenBindings(ctx, *args, **kwargs)[source]

Bases: FrozenDict

Parameters:

ctx (QueryContext) –

__abstractmethods__ = frozenset({})
__getitem__(key)[source]
Parameters:

key (Union[Identifier, str]) –

Return type:

Identifier

__init__(ctx, *args, **kwargs)[source]
Parameters:

ctx (QueryContext) –

__module__ = 'rdflib.plugins.sparql.sparql'
property bnodes: Mapping[Identifier, BNode]
Return type:

Mapping[Identifier, BNode]

forget(before, _except=None)[source]

return a frozen dict of only the bindings made in self since before

Parameters:
merge(other)[source]
Parameters:

other (Mapping[Identifier, Identifier]) –

Return type:

FrozenBindings

property now: datetime
Return type:

datetime

project(vars)[source]
Parameters:

vars (Container[Variable]) –

Return type:

FrozenBindings

property prologue: Optional[Prologue]
Return type:

Optional[Prologue]

remember(these)[source]

return a frozen dict of only the bindings in these

class rdflib.plugins.sparql.sparql.FrozenDict(*args, **kwargs)[source]

Bases: Mapping

An immutable hashable dict

Taken from http://stackoverflow.com/a/2704866/81121

Parameters:
  • args (Any) –

  • kwargs (Any) –

__abstractmethods__ = frozenset({})
__getitem__(key)[source]
Parameters:

key (Identifier) –

Return type:

Identifier

__hash__()[source]

Return hash(self).

Return type:

int

__init__(*args, **kwargs)[source]
Parameters:
  • args (Any) –

  • kwargs (Any) –

__iter__()[source]
__len__()[source]
Return type:

int

__module__ = 'rdflib.plugins.sparql.sparql'
__repr__()[source]

Return repr(self).

Return type:

str

__str__()[source]

Return str(self).

Return type:

str

__weakref__

list of weak references to the object (if defined)

compatible(other)[source]
Parameters:

other (Mapping[Identifier, Identifier]) –

Return type:

bool

disjointDomain(other)[source]
Parameters:

other (Mapping[Identifier, Identifier]) –

Return type:

bool

merge(other)[source]
Parameters:

other (Mapping[Identifier, Identifier]) –

Return type:

FrozenDict

project(vars)[source]
Parameters:

vars (Container[Variable]) –

Return type:

FrozenDict

exception rdflib.plugins.sparql.sparql.NotBoundError(msg=None)[source]

Bases: SPARQLError

Parameters:

msg (Optional[str]) –

__init__(msg=None)[source]
Parameters:

msg (Optional[str]) –

__module__ = 'rdflib.plugins.sparql.sparql'
class rdflib.plugins.sparql.sparql.Prologue[source]

Bases: object

A class for holding prefixing bindings and base URI information

__init__()[source]
__module__ = 'rdflib.plugins.sparql.sparql'
__weakref__

list of weak references to the object (if defined)

absolutize(iri)[source]

Apply BASE / PREFIXes to URIs (and to datatypes in Literals)

TODO: Move resolving URIs to pre-processing

Parameters:

iri (Union[CompValue, str, None]) –

Return type:

Union[CompValue, str, None]

bind(prefix, uri)[source]
Parameters:
Return type:

None

resolvePName(prefix, localname)[source]
Parameters:
Return type:

URIRef

class rdflib.plugins.sparql.sparql.Query(prologue, algebra)[source]

Bases: object

A parsed and translated query

Parameters:
__init__(prologue, algebra)[source]
Parameters:
__module__ = 'rdflib.plugins.sparql.sparql'
__weakref__

list of weak references to the object (if defined)

class rdflib.plugins.sparql.sparql.QueryContext(graph=None, bindings=None, initBindings=None)[source]

Bases: object

Query context - passed along when evaluating the query

Parameters:
__getitem__(key)[source]
Return type:

Any

__init__(graph=None, bindings=None, initBindings=None)[source]
Parameters:
__module__ = 'rdflib.plugins.sparql.sparql'
__setitem__(key, value)[source]
Parameters:
Return type:

None

__weakref__

list of weak references to the object (if defined)

clean()[source]
Return type:

QueryContext

clone(bindings=None)[source]
Parameters:

bindings (Union[Bindings, FrozenBindings, List[Any], None]) –

Return type:

QueryContext

property dataset: ConjunctiveGraph

The current dataset

Return type:

ConjunctiveGraph

get(key, default=None)[source]
Parameters:
load(source, default=False, **kwargs)[source]
Parameters:
property now: datetime
Return type:

datetime

push()[source]
Return type:

QueryContext

pushGraph(graph)[source]
Parameters:

graph (Optional[Graph]) –

Return type:

QueryContext

solution(vars=None)[source]

Return a static copy of the current variable bindings as a dict

Parameters:

vars (Optional[Iterable[Variable]]) –

Return type:

FrozenBindings

thaw(frozenbindings)[source]

Create a new read/write query context from the given solution

Parameters:

frozenbindings (FrozenBindings) –

Return type:

QueryContext

exception rdflib.plugins.sparql.sparql.SPARQLError(msg=None)[source]

Bases: Exception

Parameters:

msg (Optional[str]) –

__init__(msg=None)[source]
Parameters:

msg (Optional[str]) –

__module__ = 'rdflib.plugins.sparql.sparql'
__weakref__

list of weak references to the object (if defined)

exception rdflib.plugins.sparql.sparql.SPARQLTypeError(msg)[source]

Bases: SPARQLError

Parameters:

msg (Optional[str]) –

__init__(msg)[source]
Parameters:

msg (Optional[str]) –

__module__ = 'rdflib.plugins.sparql.sparql'
class rdflib.plugins.sparql.sparql.Update(prologue, algebra)[source]

Bases: object

A parsed and translated update

Parameters:
__init__(prologue, algebra)[source]
Parameters:
__module__ = 'rdflib.plugins.sparql.sparql'
__weakref__

list of weak references to the object (if defined)

rdflib.plugins.sparql.update module

Code for carrying out Update Operations

rdflib.plugins.sparql.update.evalAdd(ctx, u)[source]

add all triples from src to dst

http://www.w3.org/TR/sparql11-update/#add

rdflib.plugins.sparql.update.evalClear(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#clear

rdflib.plugins.sparql.update.evalCopy(ctx, u)[source]

remove all triples from dst, then add all triples from src to dst

http://www.w3.org/TR/sparql11-update/#copy

rdflib.plugins.sparql.update.evalCreate(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#create

rdflib.plugins.sparql.update.evalDeleteData(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#deleteData

rdflib.plugins.sparql.update.evalDeleteWhere(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#deleteWhere

rdflib.plugins.sparql.update.evalDrop(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#drop

rdflib.plugins.sparql.update.evalInsertData(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#insertData

rdflib.plugins.sparql.update.evalLoad(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#load

rdflib.plugins.sparql.update.evalModify(ctx, u)[source]
rdflib.plugins.sparql.update.evalMove(ctx, u)[source]

remove all triples from dst, add all triples from src to dst, then remove all triples from src

http://www.w3.org/TR/sparql11-update/#move

rdflib.plugins.sparql.update.evalUpdate(graph, update, initBindings={})[source]

http://www.w3.org/TR/sparql11-update/#updateLanguage

‘A request is a sequence of operations […] Implementations MUST ensure that operations of a single request are executed in a fashion that guarantees the same effects as executing them in lexical order.

Operations all result either in success or failure.

If multiple operations are present in a single request, then a result of failure from any operation MUST abort the sequence of operations, causing the subsequent operations to be ignored.’

This will return None on success and raise Exceptions on error

Module contents

SPARQL implementation for RDFLib

New in version 4.0.

rdflib.plugins.sparql.CUSTOM_EVALS = {}

Custom evaluation functions

These must be functions taking (ctx, part) and raising NotImplementedError if they cannot handle a certain part
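
A sketch modeled on rdflib's custom_eval example; the key 'exampleEval' is arbitrary:

    import rdflib.plugins.sparql
    from rdflib.plugins.sparql.evaluate import evalBGP

    def my_custom_eval(ctx, part):
        if part.name == "BGP":
            # an interception point: a hook could rewrite part.triples here
            return evalBGP(ctx, part.triples)
        raise NotImplementedError()  # defer everything else to the defaults

    rdflib.plugins.sparql.CUSTOM_EVALS["exampleEval"] = my_custom_eval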

rdflib.plugins.sparql.SPARQL_DEFAULT_GRAPH_UNION = True

If True, the default graph in the RDF Dataset is the union of all named graphs (like RDFLib’s ConjunctiveGraph)

rdflib.plugins.sparql.SPARQL_LOAD_GRAPHS = True

If True, using FROM <uri> and FROM NAMED <uri> will load/parse more data