rdflib.plugins.sparql package¶
Subpackages¶
- rdflib.plugins.sparql.results package
- Submodules
- rdflib.plugins.sparql.results.csvresults module
- rdflib.plugins.sparql.results.graph module
- rdflib.plugins.sparql.results.jsonresults module
- rdflib.plugins.sparql.results.rdfresults module
- rdflib.plugins.sparql.results.tsvresults module
- rdflib.plugins.sparql.results.txtresults module
- rdflib.plugins.sparql.results.xmlresults module
- Module contents
Submodules¶
rdflib.plugins.sparql.aggregates module¶
- class rdflib.plugins.sparql.aggregates.Accumulator(aggregation)[source]¶
  Bases: object
  Abstract base class for the different aggregation functions.
- class rdflib.plugins.sparql.aggregates.Aggregator(aggregations)[source]¶
  Bases: object
  Combines different Accumulator objects.
  - accumulator_classes = {'Aggregate_Avg': Average, 'Aggregate_Count': Counter, 'Aggregate_GroupConcat': GroupConcat, 'Aggregate_Max': Maximum, 'Aggregate_Min': Minimum, 'Aggregate_Sample': Sample, 'Aggregate_Sum': Sum}¶
- class rdflib.plugins.sparql.aggregates.Average(aggregation)[source]¶
  Bases: rdflib.plugins.sparql.aggregates.Accumulator
- class rdflib.plugins.sparql.aggregates.Counter(aggregation)[source]¶
  Bases: rdflib.plugins.sparql.aggregates.Accumulator
- class rdflib.plugins.sparql.aggregates.Extremum(aggregation)[source]¶
  Bases: rdflib.plugins.sparql.aggregates.Accumulator
  Abstract base class for Minimum and Maximum.
- class rdflib.plugins.sparql.aggregates.GroupConcat(aggregation)[source]¶
  Bases: rdflib.plugins.sparql.aggregates.Accumulator
- class rdflib.plugins.sparql.aggregates.Maximum(aggregation)[source]¶
  Bases: rdflib.plugins.sparql.aggregates.Extremum
- class rdflib.plugins.sparql.aggregates.Minimum(aggregation)[source]¶
  Bases: rdflib.plugins.sparql.aggregates.Extremum
- class rdflib.plugins.sparql.aggregates.Sample(aggregation)[source]¶
  Bases: rdflib.plugins.sparql.aggregates.Accumulator
  Takes the first eligible value.
- class rdflib.plugins.sparql.aggregates.Sum(aggregation)[source]¶
  Bases: rdflib.plugins.sparql.aggregates.Accumulator
rdflib.plugins.sparql.algebra module¶
Converting the ‘parse-tree’ output of pyparsing to a SPARQL Algebra expression
http://www.w3.org/TR/sparql11-query/#sparqlQuery
- exception rdflib.plugins.sparql.algebra.StopTraversal(rv)[source]¶
  Bases: Exception
- rdflib.plugins.sparql.algebra.analyse(n, children)[source]¶
  Some things can be lazily joined. This propagates that information up the tree and sets the lazy flags for all joins.
- rdflib.plugins.sparql.algebra.collectAndRemoveFilters(parts)[source]¶
  FILTER expressions apply to the whole group graph pattern in which they appear.
- rdflib.plugins.sparql.algebra.reorderTriples(l)[source]¶
  Reorder triple patterns so that we execute the ones with the most bindings first.
- rdflib.plugins.sparql.algebra.translateExists(e)[source]¶
  Translate the graph pattern used by EXISTS and NOT EXISTS.
  http://www.w3.org/TR/sparql11-query/#sparqlCollectFilters
- rdflib.plugins.sparql.algebra.translateQuery(q, base=None, initNs=None)[source]¶
  Translate a query parse-tree to a SPARQL Algebra expression.
  Returns a rdflib.plugins.sparql.sparql.Query object.
- rdflib.plugins.sparql.algebra.translateUpdate(q, base=None, initNs=None)[source]¶
  Returns a list of SPARQL Update Algebra expressions.
- rdflib.plugins.sparql.algebra.traverse(tree, visitPre=<function <lambda>>, visitPost=<function <lambda>>, complete=None)[source]¶
  Traverse tree, visiting each node with the visit functions. A visit function may raise StopTraversal to stop the traversal. If complete is not None, it is returned on complete traversal; otherwise the transformed tree is returned.
rdflib.plugins.sparql.datatypes module¶
Utility functions for supporting the XML Schema Datatypes hierarchy
rdflib.plugins.sparql.evaluate module¶
These methods recursively evaluate the SPARQL Algebra.
evalQuery is the entry point; it will set up the context and return a SPARQLResult object.
evalPart is called on each level and delegates to the right method.
A rdflib.plugins.sparql.sparql.QueryContext is passed along, keeping the information needed for evaluation.
A list of dicts (solution mappings) is returned, apart from GroupBy, which may also return a dict of lists of dicts.
- rdflib.plugins.sparql.evaluate.evalLazyJoin(ctx, join)[source]¶
  A lazy join pushes the variables bound in the first part to the second part, essentially doing the join implicitly and hopefully evaluating far fewer triples.
rdflib.plugins.sparql.evalutils module¶
rdflib.plugins.sparql.operators module¶
This contains evaluation functions for expressions.
They get bound as instance methods to the CompValue objects from parserutils using setEvalFn.
- rdflib.plugins.sparql.operators.Builtin_LANG(e, ctx)[source]¶
  http://www.w3.org/TR/sparql11-query/#func-lang
  Returns the language tag of ltrl, if it has one. It returns "" if ltrl has no language tag. Note that the RDF data model does not include literals with an empty language tag.
- rdflib.plugins.sparql.operators.Builtin_REGEX(expr, ctx)[source]¶
  http://www.w3.org/TR/sparql11-query/#func-regex
  Invokes the XPath fn:matches function to match text against a regular expression pattern. The regular expression language is defined in XQuery 1.0 and XPath 2.0 Functions and Operators, section 7.6.1, Regular Expression Syntax.
- rdflib.plugins.sparql.operators.Builtin_TIMEZONE(e, ctx)[source]¶
  http://www.w3.org/TR/sparql11-query/#func-timezone
  Returns: the timezone part of arg as an xsd:dayTimeDuration.
  Raises: an error if there is no timezone.
- rdflib.plugins.sparql.operators.EBV(rt)[source]¶
  Computes the Effective Boolean Value of rt:
  If the argument is a typed literal with a datatype of xsd:boolean, the EBV is the value of that argument.
  If the argument is a plain literal or a typed literal with a datatype of xsd:string, the EBV is false if the operand value has zero length; otherwise the EBV is true.
  If the argument is a numeric type or a typed literal with a datatype derived from a numeric type, the EBV is false if the operand value is NaN or is numerically equal to zero; otherwise the EBV is true.
  All other arguments, including unbound arguments, produce a type error.
- rdflib.plugins.sparql.operators.custom_function(uri, override=False, raw=False)[source]¶
  Decorator version of register_custom_function().
- rdflib.plugins.sparql.operators.numeric(expr)[source]¶
  Return a number from a literal, or raise a TypeError.
  http://www.w3.org/TR/xpath20/#promotion
- rdflib.plugins.sparql.operators.register_custom_function(uri, func, override=False, raw=False)[source]¶
  Register a custom SPARQL function.
  By default, the function will be passed the RDF terms in the argument list. If raw is True, the function will be passed an Expression and a Context.
  The function must return an RDF term, or raise a SparqlError.
rdflib.plugins.sparql.parser module¶
SPARQL 1.1 Parser
based on pyparsing
- rdflib.plugins.sparql.parser.expandBNodeTriples(terms)[source]¶
  Expand the [ ?p ?o ] syntax for implicit bnodes.
- rdflib.plugins.sparql.parser.expandCollection(terms)[source]¶
  Expand the ( 1 2 3 ) notation for collections.
- rdflib.plugins.sparql.parser.expandTriples(terms)[source]¶
  Expand the ; and , syntax for repeated predicates and subjects.
- rdflib.plugins.sparql.parser.expandUnicodeEscapes(q)[source]¶
  The syntax of the SPARQL Query Language is expressed over code points in Unicode [UNICODE]. The encoding is always UTF-8 [RFC3629]. Unicode code points may also be expressed using a \uXXXX (U+0 to U+FFFF) or \UXXXXXXXX syntax (for U+10000 onwards), where X is a hexadecimal digit [0-9A-F].
rdflib.plugins.sparql.parserutils module¶
- class rdflib.plugins.sparql.parserutils.Comp(name, expr)[source]¶
  Bases: pyparsing.TokenConverter
  A pyparsing token for grouping things together with a label. Any sub-tokens that are not Params will be ignored.
  Returns CompValue / Expr objects, depending on whether evalFn is set.
- class rdflib.plugins.sparql.parserutils.CompValue(name, **values)[source]¶
  Bases: collections.OrderedDict
  The result of parsing a Comp. Any included Params are available as dict keys or as attributes.
- class rdflib.plugins.sparql.parserutils.Expr(name, evalfn=None, **values)[source]¶
  Bases: rdflib.plugins.sparql.parserutils.CompValue
  A CompValue that is evaluatable.
- class rdflib.plugins.sparql.parserutils.Param(name, expr, isList=False)[source]¶
  Bases: pyparsing.TokenConverter
  A pyparsing token for labelling a part of the parse-tree. If isList is true, repeated occurrences of ParamList have their values merged into a list.
- class rdflib.plugins.sparql.parserutils.ParamList(name, expr)[source]¶
  Bases: rdflib.plugins.sparql.parserutils.Param
  A shortcut for a Param with isList=True.
- class rdflib.plugins.sparql.parserutils.ParamValue(name, tokenList, isList)[source]¶
  Bases: object
  The result of parsing a Param. This just keeps the name/value; all the cleverness is in the CompValue.
- class rdflib.plugins.sparql.parserutils.plist[source]¶
  Bases: list
  This is just a list, but we want our own type to check for.
- rdflib.plugins.sparql.parserutils.value(ctx, val, variables=False, errors=False)[source]¶
  Utility function for evaluating something.
  Variables will be looked up in the context. Normally, a non-bound variable is an error; set variables=True to return unbound variables instead.
  Normally, an error raises the error; set errors=True to return the error instead.
rdflib.plugins.sparql.processor module¶
Code for tying SPARQL Engine into RDFLib
These should be automatically registered with RDFLib
- class rdflib.plugins.sparql.processor.SPARQLProcessor(graph)[source]¶
  Bases: rdflib.query.Processor
- class rdflib.plugins.sparql.processor.SPARQLResult(res)[source]¶
  Bases: rdflib.query.Result
- class rdflib.plugins.sparql.processor.SPARQLUpdateProcessor(graph)[source]¶
  Bases: rdflib.query.UpdateProcessor
rdflib.plugins.sparql.sparql module¶
- exception rdflib.plugins.sparql.sparql.AlreadyBound[source]¶
  Bases: rdflib.plugins.sparql.sparql.SPARQLError
  Raised when trying to bind a variable that is already bound!
- class rdflib.plugins.sparql.sparql.Bindings(outer=None, d=[])[source]¶
  Bases: collections.abc.MutableMapping
  A single level of a stack of variable-value bindings. Each dict keeps a reference to the dict below it; any failed lookup is propagated back.
  In Python 3.3 this could be a collections.ChainMap.
- class rdflib.plugins.sparql.sparql.FrozenBindings(ctx, *args, **kwargs)[source]¶
  Bases: rdflib.plugins.sparql.sparql.FrozenDict
  - property bnodes¶
  - forget(before, _except=None)[source]¶
    Return a frozen dict only of bindings made in self since before.
  - property now¶
  - property prologue¶
- class rdflib.plugins.sparql.sparql.FrozenDict(*args, **kwargs)[source]¶
  Bases: collections.abc.Mapping
  An immutable hashable dict.
  Taken from http://stackoverflow.com/a/2704866/81121
- exception rdflib.plugins.sparql.sparql.NotBoundError(msg=None)[source]¶
  Bases: rdflib.plugins.sparql.sparql.SPARQLError
- class rdflib.plugins.sparql.sparql.Prologue[source]¶
  Bases: object
  A class for holding prefix bindings and base URI information.
- class rdflib.plugins.sparql.sparql.Query(prologue, algebra)[source]¶
  Bases: object
  A parsed and translated query.
- class rdflib.plugins.sparql.sparql.QueryContext(graph=None, bindings=None, initBindings=None)[source]¶
  Bases: object
  Query context - passed along when evaluating the query.
  - property dataset¶
    The current dataset.
rdflib.plugins.sparql.update module¶
Code for carrying out Update Operations
- rdflib.plugins.sparql.update.evalCopy(ctx, u)[source]¶
  Remove all triples from dst, then add all triples from src to dst.
- rdflib.plugins.sparql.update.evalMove(ctx, u)[source]¶
  Remove all triples from dst, add all triples from src to dst, then remove all triples from src.
- rdflib.plugins.sparql.update.evalUpdate(graph, update, initBindings={})[source]¶
  http://www.w3.org/TR/sparql11-update/#updateLanguage
  'A request is a sequence of operations […] Implementations MUST ensure that operations of a single request are executed in a fashion that guarantees the same effects as executing them in lexical order.
  Operations all result either in success or failure.
  If multiple operations are present in a single request, then a result of failure from any operation MUST abort the sequence of operations, causing the subsequent operations to be ignored.'
  This will return None on success and raise an Exception on error.
Module contents¶
SPARQL implementation for RDFLib
New in version 4.0.
- rdflib.plugins.sparql.CUSTOM_EVALS = {}¶
  Custom evaluation functions.
  These must be functions taking (ctx, part) and raising NotImplementedError if they cannot handle a certain part.
- rdflib.plugins.sparql.SPARQL_DEFAULT_GRAPH_UNION = True¶
  If True, the default graph in the RDF Dataset is the union of all named graphs (like RDFLib's ConjunctiveGraph).
- rdflib.plugins.sparql.SPARQL_LOAD_GRAPHS = True¶
  If True, using FROM <uri> and FROM NAMED <uri> will load/parse more data.