This document is intended as a reference guide to the full syntax and semantics of the SQL++ Query Language, a SQL-inspired language for working with semistructured data. SQL++ has much in common with SQL, but some differences do exist due to the different data models that the two languages were designed to serve. SQL was designed in the 1970’s for interacting with the flat, schema-ified world of relational databases, while SQL++ is much newer and targets the nested, schema-optional (or even schema-less) world of modern NoSQL systems.
In the context of Apache AsterixDB, SQL++ is intended for working with the Asterix Data Model (ADM), a data model based on a superset of JSON with an enriched and flexible type system. New AsterixDB users are encouraged to read and work through the (much friendlier) guide “AsterixDB 101: An ADM and SQL++ Primer” before attempting to make use of this document. In addition, readers are advised to read through the Asterix Data Model (ADM) reference guide first as well, as an understanding of the data model is a prerequisite to understanding SQL++.
In what follows, we detail the features of the SQL++ language in a grammar-guided manner. We list and briefly explain each of the productions in the SQL++ grammar, offering examples (and results) for clarity.
SQL++ is a highly composable expression language. Each SQL++ expression returns zero or more data model instances. There are three major kinds of expressions in SQL++. At the topmost level, a SQL++ expression can be an OperatorExpression (similar to a mathematical expression), a ConditionalExpression (to choose between alternative values), or a QuantifiedExpression (which yields a boolean value). Each will be detailed as we explore the full SQL++ grammar.
Expression ::= OperatorExpression | CaseExpression | QuantifiedExpression
Note that in the following text, words enclosed in angle brackets denote keywords that are not case-sensitive.
Operators perform a specific operation on the input values or expressions. The syntax of an operator expression is as follows:
OperatorExpression ::= PathExpression | Operator OperatorExpression | OperatorExpression Operator (OperatorExpression)? | OperatorExpression <BETWEEN> OperatorExpression <AND> OperatorExpression
SQL++ provides a full set of operators that you can use within its statements. Here are the categories of operators:
The following table summarizes the precedence order (from higher to lower) of the major unary and binary operators:
Operator | Operation |
---|---|
EXISTS, NOT EXISTS | Collection emptiness testing |
^ | Exponentiation |
*, /, % | Multiplication, division, modulo |
+, - | Addition, subtraction |
|| | String concatenation |
IS NULL, IS NOT NULL, IS MISSING, IS NOT MISSING, IS UNKNOWN, IS NOT UNKNOWN | Unknown value comparison |
BETWEEN, NOT BETWEEN | Range comparison (inclusive on both sides) |
=, !=, <>, <, >, <=, >=, LIKE, NOT LIKE, IN, NOT IN | Comparison |
NOT | Logical negation |
AND | Conjunction |
OR | Disjunction |
In general, if any operand evaluates to a MISSING value, the enclosing operator will return MISSING; if none of the operands evaluates to a MISSING value but at least one operand evaluates to a NULL value, the enclosing operator will return NULL. However, there are a few exceptions, listed under comparison operators and logical operators.
Arithmetic operators are used to exponentiate, add, subtract, multiply, and divide numeric values, or concatenate string values.
Operator | Purpose | Example |
---|---|---|
+, - | As unary operators, they denote a positive or negative expression | SELECT VALUE -1; |
+, - | As binary operators, they add or subtract | SELECT VALUE 1 + 2; |
*, /, % | Multiply, divide, modulo | SELECT VALUE 4 / 2.0; |
^ | Exponentiation | SELECT VALUE 2^3; |
|| | String concatenation | SELECT VALUE "ab"||"c"||"d"; |
Collection operators are used for membership tests (IN, NOT IN) or empty collection tests (EXISTS, NOT EXISTS).
Operator | Purpose | Example |
---|---|---|
IN | Membership test | SELECT * FROM ChirpMessages cm WHERE cm.user.lang IN ["en", "de"]; |
NOT IN | Non-membership test | SELECT * FROM ChirpMessages cm WHERE cm.user.lang NOT IN ["en"]; |
EXISTS | Check whether a collection is not empty | SELECT * FROM ChirpMessages cm WHERE EXISTS cm.referredTopics; |
NOT EXISTS | Check whether a collection is empty | SELECT * FROM ChirpMessages cm WHERE NOT EXISTS cm.referredTopics; |
Comparison operators are used to compare values. The comparison operators fall into one of two sub-categories: missing value comparisons and regular value comparisons. SQL++ (and JSON) has two ways of representing missing information in an object: the presence of the field with a NULL for its value (as in SQL), and the absence of the field (which JSON permits). For example, the first of the following objects represents Jack, whose friend is Jill. In the other examples, Jake is friendless a la SQL, with a friend field that is NULL, while Joe is friendless in a more natural (for JSON) way, i.e., by not having a friend field.
{"name": "Jack", "friend": "Jill"}
{"name": "Jake", "friend": NULL}
{"name": "Joe"}
The following table enumerates all of SQL++’s comparison operators.
Operator | Purpose | Example |
---|---|---|
IS NULL | Test if a value is NULL | SELECT * FROM ChirpMessages cm WHERE cm.user.name IS NULL; |
IS NOT NULL | Test if a value is not NULL | SELECT * FROM ChirpMessages cm WHERE cm.user.name IS NOT NULL; |
IS MISSING | Test if a value is MISSING | SELECT * FROM ChirpMessages cm WHERE cm.user.name IS MISSING; |
IS NOT MISSING | Test if a value is not MISSING | SELECT * FROM ChirpMessages cm WHERE cm.user.name IS NOT MISSING; |
IS UNKNOWN | Test if a value is NULL or MISSING | SELECT * FROM ChirpMessages cm WHERE cm.user.name IS UNKNOWN; |
IS NOT UNKNOWN | Test if a value is neither NULL nor MISSING | SELECT * FROM ChirpMessages cm WHERE cm.user.name IS NOT UNKNOWN; |
BETWEEN | Test if a value is between a start value and an end value. The comparison is inclusive to both start and end values. | SELECT * FROM ChirpMessages cm WHERE cm.chirpId BETWEEN 10 AND 20; |
= | Equality test | SELECT * FROM ChirpMessages cm WHERE cm.chirpId=10; |
!= | Inequality test | SELECT * FROM ChirpMessages cm WHERE cm.chirpId!=10; |
<> | Inequality test | SELECT * FROM ChirpMessages cm WHERE cm.chirpId<>10; |
< | Less than | SELECT * FROM ChirpMessages cm WHERE cm.chirpId<10; |
> | Greater than | SELECT * FROM ChirpMessages cm WHERE cm.chirpId>10; |
<= | Less than or equal to | SELECT * FROM ChirpMessages cm WHERE cm.chirpId<=10; |
>= | Greater than or equal to | SELECT * FROM ChirpMessages cm WHERE cm.chirpId>=10; |
LIKE | Test if the left side matches a pattern defined on the right side; in the pattern, "%" matches any string while "_" matches any character. | SELECT * FROM ChirpMessages cm WHERE cm.user.name LIKE "%Giesen%"; |
NOT LIKE | Test if the left side does not match a pattern defined on the right side; in the pattern, "%" matches any string while "_" matches any character. | SELECT * FROM ChirpMessages cm WHERE cm.user.name NOT LIKE "%Giesen%"; |
The following table summarizes how the missing value comparison operators work.
Operator | Non-NULL/Non-MISSING value | NULL | MISSING |
---|---|---|---|
IS NULL | FALSE | TRUE | MISSING |
IS NOT NULL | TRUE | FALSE | MISSING |
IS MISSING | FALSE | FALSE | TRUE |
IS NOT MISSING | TRUE | TRUE | FALSE |
IS UNKNOWN | FALSE | TRUE | TRUE |
IS NOT UNKNOWN | TRUE | FALSE | FALSE |
Logical operators perform logical NOT, AND, and OR operations over Boolean values (TRUE and FALSE) plus NULL and MISSING.
Operator | Purpose | Example |
---|---|---|
NOT | Returns true if the following condition is false, otherwise returns false | SELECT VALUE NOT TRUE; |
AND | Returns true if both branches are true, otherwise returns false | SELECT VALUE TRUE AND FALSE; |
OR | Returns true if one branch is true, otherwise returns false | SELECT VALUE FALSE OR FALSE; |
The following table is the truth table for AND and OR.
A | B | A AND B | A OR B |
---|---|---|---|
TRUE | TRUE | TRUE | TRUE |
TRUE | FALSE | FALSE | TRUE |
TRUE | NULL | NULL | TRUE |
TRUE | MISSING | MISSING | TRUE |
FALSE | FALSE | FALSE | FALSE |
FALSE | NULL | FALSE | NULL |
FALSE | MISSING | FALSE | MISSING |
NULL | NULL | NULL | NULL |
NULL | MISSING | MISSING | NULL |
MISSING | MISSING | MISSING | MISSING |
The following table demonstrates the results of NOT on all possible inputs.
A | NOT A |
---|---|
TRUE | FALSE |
FALSE | TRUE |
NULL | NULL |
MISSING | MISSING |
CaseExpression ::= SimpleCaseExpression | SearchedCaseExpression
SimpleCaseExpression ::= <CASE> Expression ( <WHEN> Expression <THEN> Expression )+ ( <ELSE> Expression )? <END>
SearchedCaseExpression ::= <CASE> ( <WHEN> Expression <THEN> Expression )+ ( <ELSE> Expression )? <END>
In a simple CASE expression, the query evaluator searches for the first WHEN … THEN pair in which the WHEN expression is equal to the expression following CASE and returns the expression following THEN. If none of the WHEN … THEN pairs meet this condition, and an ELSE branch exists, it returns the ELSE expression. Otherwise, NULL is returned.
In a searched CASE expression, the query evaluator searches from left to right until it finds a WHEN expression that evaluates to TRUE, and then returns its corresponding THEN expression. If no condition is found to be TRUE, and an ELSE branch exists, it returns the ELSE expression. Otherwise, it returns NULL.
The following example illustrates the form of a case expression.
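For instance, the following simple CASE expression (a minimal sketch) evaluates to "yes", because the expression after CASE, (2 < 3), is equal to the first WHEN expression:

CASE (2 < 3) WHEN true THEN "yes" ELSE "no" END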
QuantifiedExpression ::= ( (<ANY>|<SOME>) | <EVERY> ) Variable <IN> Expression ( "," Variable "in" Expression )* <SATISFIES> Expression (<END>)?
Quantified expressions are used for expressing existential or universal predicates involving the elements of a collection.
The following pair of examples illustrate the use of a quantified expression to test that every (or some) element in the set [1, 2, 3] of integers is less than three. The first example yields FALSE and the second example yields TRUE.
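A sketch of the two expressions (the variable name x is illustrative):

EVERY x IN [ 1, 2, 3 ] SATISFIES x < 3
SOME x IN [ 1, 2, 3 ] SATISFIES x < 3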
It is useful to note that if the set were instead the empty set, the first expression would yield TRUE (“every” value in an empty set satisfies the condition) while the second expression would yield FALSE (since there isn’t “some” value, as there are no values in the set, that satisfies the condition).
A quantified expression will return a NULL (or MISSING) if the first expression in it evaluates to NULL (or MISSING). A type error will be raised if the first expression in a quantified expression does not return a collection.
PathExpression ::= PrimaryExpression ( Field | Index )*
Field ::= "." Identifier
Index ::= "[" ( Expression | "?" ) "]"
Components of complex types in the data model are accessed via path expressions. Path access can be applied to the result of a SQL++ expression that yields an instance of a complex type, for example, an object or array instance. For objects, path access is based on field names. For arrays, path access is based on (zero-based) array-style indexing. SQL++ also supports an “I’m feeling lucky” style index accessor, [?], for selecting an arbitrary element from an array. Attempts to access non-existent fields or out-of-bound array elements produce the special value MISSING. Type errors will be raised for inappropriate use of a path expression, such as applying a field accessor to a numeric value.
The following examples illustrate field access for an object, index-based element access for an array, and also a composition thereof.
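For instance (a sketch using ad hoc constructed values), the first expression below accesses a field of an object, the second accesses the element at index 2 of an array, and the third composes the two:

({"name": "MyABCs", "array": ["a", "b", "c"]}).name
(["a", "b", "c"])[2]
({"name": "MyABCs", "array": ["a", "b", "c"]}).array[2]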
PrimaryExpr ::= Literal | VariableReference | ParenthesizedExpression | FunctionCallExpression | Constructor
The most basic building block for any SQL++ expression is PrimaryExpression. This can be a simple literal (constant) value, a reference to a query variable that is in scope, a parenthesized expression, a function call, or a newly constructed instance of the data model (such as a newly constructed object, array, or multiset of data model instances).
Literal ::= StringLiteral | IntegerLiteral | FloatLiteral | DoubleLiteral | <NULL> | <MISSING> | <TRUE> | <FALSE>
StringLiteral ::= "\"" ( <EscapeQuot> | <EscapeBslash> | <EscapeSlash> | <EscapeBspace> | <EscapeFormf> | <EscapeNl> | <EscapeCr> | <EscapeTab> | ~["\"","\\"])* "\"" | "\'" ( <EscapeApos> | <EscapeBslash> | <EscapeSlash> | <EscapeBspace> | <EscapeFormf> | <EscapeNl> | <EscapeCr> | <EscapeTab> | ~["\'","\\"])* "\'"
<EscapeApos> ::= "\\\'"
<EscapeQuot> ::= "\\\""
<EscapeBslash> ::= "\\\\"
<EscapeSlash> ::= "\\/"
<EscapeBspace> ::= "\\b"
<EscapeFormf> ::= "\\f"
<EscapeNl> ::= "\\n"
<EscapeCr> ::= "\\r"
<EscapeTab> ::= "\\t"
IntegerLiteral ::= <DIGITS>
<DIGITS> ::= ["0" - "9"]+
FloatLiteral ::= <DIGITS> ( "f" | "F" ) | <DIGITS> ( "." <DIGITS> ( "f" | "F" ) )? | "." <DIGITS> ( "f" | "F" )
DoubleLiteral ::= <DIGITS> "." <DIGITS> | "." <DIGITS>
Literals (constants) in SQL++ can be strings, integers, floating point values, double values, boolean constants, or special constant values like NULL and MISSING. The NULL value is like a NULL in SQL; it is used to represent an unknown field value. The special value MISSING is only meaningful in the context of SQL++ field accesses; it occurs when the accessed field simply does not exist at all in the object being accessed.
The following are some simple examples of SQL++ literals.
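For instance (illustrative values):

'a string'
"test string"
42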
VariableReference ::= <IDENTIFIER> | <DelimitedIdentifier>
<IDENTIFIER> ::= <LETTER> (<LETTER> | <DIGIT> | "_" | "$")*
<LETTER> ::= ["A" - "Z", "a" - "z"]
DelimitedIdentifier ::= "`" (<EscapeQuot> | <EscapeBslash> | <EscapeSlash> | <EscapeBspace> | <EscapeFormf> | <EscapeNl> | <EscapeCr> | <EscapeTab> | ~["`","\\"])* "`"
A variable in SQL++ can be bound to any legal data model value. A variable reference refers to the value to which an in-scope variable is bound. (E.g., a variable binding may originate from one of the FROM, WITH or LET clauses of a SELECT statement or from an input parameter in the context of a function body.) Backticks, for example, `id`, are used for delimited identifiers. Delimiting is needed when a variable’s desired name clashes with a SQL++ keyword or includes characters not allowed in regular identifiers.
ParenthesizedExpression ::= "(" Expression ")" | Subquery
An expression can be parenthesized to control the precedence order or otherwise clarify a query. In SQL++, for composability, a subquery is also a parenthesized expression.
The following expression evaluates to the value 2.
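One such expression (a trivial sketch) is:

( 1 + 1 )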
FunctionCallExpression ::= FunctionName "(" ( Expression ( "," Expression )* )? ")"
Functions are included in SQL++, like most languages, as a way to package useful functionality or to componentize complicated or reusable SQL++ computations. A function call is a legal SQL++ query expression that represents the value resulting from the evaluation of its body expression with the given parameter bindings; the parameter value bindings can themselves be any SQL++ expressions.
The following example is a (built-in) function call expression whose value is 8.
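For instance, the built-in string function length (a sketch; other built-in functions work similarly) returns the number of characters in its argument:

length('a string')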
Constructor ::= ArrayConstructor | MultisetConstructor | ObjectConstructor
ArrayConstructor ::= "[" ( Expression ( "," Expression )* )? "]"
MultisetConstructor ::= "{{" ( Expression ( "," Expression )* )? "}}"
ObjectConstructor ::= "{" ( FieldBinding ( "," FieldBinding )* )? "}"
FieldBinding ::= Expression ":" Expression
A major feature of SQL++ is its ability to construct new data model instances. This is accomplished using its constructors for each of the model’s complex object structures, namely arrays, multisets, and objects. Arrays are like JSON arrays, while multisets have bag semantics. Objects are built from fields that are field-name/field-value pairs, again like JSON.
The following examples illustrate how to construct a new array with 4 items and a new object with 2 fields respectively. Array elements can be homogeneous (as in the first example), which is the common case, or they may be heterogeneous (as in the second example). The data values and field name values used to construct arrays, multisets, and objects in constructors are all simply SQL++ expressions. Thus, the collection elements, field names, and field values used in constructors can be simple literals or they can come from query variable references or even arbitrarily complex SQL++ expressions (subqueries). Type errors will be raised if the field names in an object are not strings, and duplicate field errors will be raised if they are not distinct.
[ 'a', 'b', 'c', 'c' ]

[ 42, "forty-two!", { "rank" : "Captain", "name": "America" }, 3.14159 ]

{ 'project name': 'Hyracks', 'project members': [ 'vinayakb', 'dtabass', 'chenli', 'tsotras', 'tillw' ] }
A SQL++ query can be any legal SQL++ expression or SELECT statement. A SQL++ query always ends with a semicolon.
Query ::= (Expression | SelectStatement) ";"
The following shows the (rich) grammar for the SELECT statement in SQL++.
SelectStatement ::= ( WithClause )? SelectSetOperation (OrderbyClause )? ( LimitClause )?
SelectSetOperation ::= SelectBlock (<UNION> <ALL> ( SelectBlock | Subquery ) )*
Subquery ::= "(" SelectStatement ")"
SelectBlock ::= SelectClause ( FromClause ( LetClause )?)? ( WhereClause )? ( GroupbyClause ( LetClause )? ( HavingClause )? )? | FromClause ( LetClause )? ( WhereClause )? ( GroupbyClause ( LetClause )? ( HavingClause )? )? SelectClause
SelectClause ::= <SELECT> ( <ALL> | <DISTINCT> )? ( SelectRegular | SelectValue )
SelectRegular ::= Projection ( "," Projection )*
SelectValue ::= ( <VALUE> | <ELEMENT> | <RAW> ) Expression
Projection ::= ( Expression ( <AS> )? Identifier | "*" )
FromClause ::= <FROM> FromTerm ( "," FromTerm )*
FromTerm ::= Expression (( <AS> )? Variable)? ( ( JoinType )? ( JoinClause | UnnestClause ) )*
JoinClause ::= <JOIN> Expression (( <AS> )? Variable)? <ON> Expression
UnnestClause ::= ( <UNNEST> | <CORRELATE> | <FLATTEN> ) Expression ( <AS> )? Variable ( <AT> Variable )?
JoinType ::= ( <INNER> | <LEFT> ( <OUTER> )? )
WithClause ::= <WITH> WithElement ( "," WithElement )*
LetClause ::= (<LET> | <LETTING>) LetElement ( "," LetElement )*
LetElement ::= Variable "=" Expression
WithElement ::= Variable <AS> Expression
WhereClause ::= <WHERE> Expression
GroupbyClause ::= <GROUP> <BY> ( Expression ( (<AS>)? Variable )? ( "," Expression ( (<AS>)? Variable )? )* ( <GROUP> <AS> Variable ("(" Variable <AS> VariableReference ("," Variable <AS> VariableReference )* ")")? )?
HavingClause ::= <HAVING> Expression
OrderbyClause ::= <ORDER> <BY> Expression ( <ASC> | <DESC> )? ( "," Expression ( <ASC> | <DESC> )? )*
LimitClause ::= <LIMIT> Expression ( <OFFSET> Expression )?
In this section, we will make use of two stored collections of objects (datasets), GleambookUsers and GleambookMessages, in a series of running examples to explain SELECT queries. The contents of the example collections are as follows:
GleambookUsers collection (or, dataset):
[ { "id":1, "alias":"Margarita", "name":"MargaritaStoddard", "nickname":"Mags", "userSince":"2012-08-20T10:10:00", "friendIds":[2,3,6,10], "employment":[{ "organizationName":"Codetechno", "start-date":"2006-08-06" }, { "organizationName":"geomedia", "start-date":"2010-06-17", "end-date":"2010-01-26" }], "gender":"F" }, { "id":2, "alias":"Isbel", "name":"IsbelDull", "nickname":"Izzy", "userSince":"2011-01-22T10:10:00", "friendIds":[1,4], "employment":[{ "organizationName":"Hexviafind", "startDate":"2010-04-27" }] }, { "id":3, "alias":"Emory", "name":"EmoryUnk", "userSince":"2012-07-10T10:10:00", "friendIds":[1,5,8,9], "employment":[{ "organizationName":"geomedia", "startDate":"2010-06-17", "endDate":"2010-01-26" }] } ]
GleambookMessages collection (or, dataset):
[ { "messageId":2, "authorId":1, "inResponseTo":4, "senderLocation":[41.66,80.87], "message":" dislike x-phone its touch-screen is horrible" }, { "messageId":3, "authorId":2, "inResponseTo":4, "senderLocation":[48.09,81.01], "message":" like product-y the plan is amazing" }, { "messageId":4, "authorId":1, "inResponseTo":2, "senderLocation":[37.73,97.04], "message":" can't stand acast the network is horrible:(" }, { "messageId":6, "authorId":2, "inResponseTo":1, "senderLocation":[31.5,75.56], "message":" like product-z its platform is mind-blowing" }, { "messageId":8, "authorId":1, "inResponseTo":11, "senderLocation":[40.33,80.87], "message":" like ccast the 3G is awesome:)" }, { "messageId":10, "authorId":1, "inResponseTo":12, "senderLocation":[42.5,70.01], "message":" can't stand product-w the touch-screen is terrible" }, { "messageId":11, "authorId":1, "inResponseTo":1, "senderLocation":[38.97,77.49], "message":" can't stand acast its plan is terrible" } ]
The SQL++ SELECT clause always returns a collection value as its result (even if the result is empty or a singleton).
The SELECT VALUE clause in SQL++ returns an array or multiset that contains the results of evaluating the VALUE expression, with one evaluation being performed per “binding tuple” (i.e., per FROM clause item) satisfying the statement’s selection criteria. For historical reasons SQL++ also allows the keywords ELEMENT or RAW to be used in place of VALUE (not recommended).
If there is no FROM clause, the expression after VALUE is evaluated once with no binding tuples (except those inherited from an outer environment).
SELECT VALUE 1;
This query returns:
[ 1 ]
The following example shows a query that selects one user from the GleambookUsers collection.
SELECT VALUE user FROM GleambookUsers user WHERE user.id = 1;
This query returns:
[{ "userSince": "2012-08-20T10:10:00.000Z", "friendIds": [ 2, 3, 6, 10 ], "gender": "F", "name": "MargaritaStoddard", "nickname": "Mags", "alias": "Margarita", "id": 1, "employment": [ { "organizationName": "Codetechno", "start-date": "2006-08-06" }, { "end-date": "2010-01-26", "organizationName": "geomedia", "start-date": "2010-06-17" } ] } ]
In SQL++, the traditional SQL-style SELECT syntax is also supported. This syntax can also be reformulated in a SELECT VALUE based manner in SQL++. (E.g., SELECT expA AS fldA, expB AS fldB is syntactic sugar for SELECT VALUE { 'fldA': expA, 'fldB': expB }.) Unlike in SQL, the result of an SQL++ query does not preserve the order of expressions in the SELECT clause.
In SQL++, SELECT * returns an object with a nested field for each input tuple. Each field has as its field name the name of a binding variable generated by either the FROM clause or GROUP BY clause in the current enclosing SELECT statement, and its field value is the value of that binding variable.
Note that the result of SELECT * is different from the result of a query that selects all the fields of an object.
SELECT * FROM GleambookUsers user;
Since user is the only binding variable generated in the FROM clause, this query returns:
[ { "user": { "userSince": "2012-08-20T10:10:00.000Z", "friendIds": [ 2, 3, 6, 10 ], "gender": "F", "name": "MargaritaStoddard", "nickname": "Mags", "alias": "Margarita", "id": 1, "employment": [ { "organizationName": "Codetechno", "start-date": "2006-08-06" }, { "end-date": "2010-01-26", "organizationName": "geomedia", "start-date": "2010-06-17" } ] } }, { "user": { "userSince": "2011-01-22T10:10:00.000Z", "friendIds": [ 1, 4 ], "name": "IsbelDull", "nickname": "Izzy", "alias": "Isbel", "id": 2, "employment": [ { "organizationName": "Hexviafind", "startDate": "2010-04-27" } ] } }, { "user": { "userSince": "2012-07-10T10:10:00.000Z", "friendIds": [ 1, 5, 8, 9 ], "name": "EmoryUnk", "alias": "Emory", "id": 3, "employment": [ { "organizationName": "geomedia", "endDate": "2010-01-26", "startDate": "2010-06-17" } ] } } ]
SELECT * FROM GleambookUsers u, GleambookMessages m WHERE m.authorId = u.id and u.id = 2;
This query does an inner join of the kind discussed further under multiple FROM terms. Since both u and m are binding variables generated in the FROM clause, this query returns:
[ { "u": { "userSince": "2011-01-22T10:10:00", "friendIds": [ 1, 4 ], "name": "IsbelDull", "nickname": "Izzy", "alias": "Isbel", "id": 2, "employment": [ { "organizationName": "Hexviafind", "startDate": "2010-04-27" } ] }, "m": { "senderLocation": [ 31.5, 75.56 ], "inResponseTo": 1, "messageId": 6, "authorId": 2, "message": " like product-z its platform is mind-blowing" } }, { "u": { "userSince": "2011-01-22T10:10:00", "friendIds": [ 1, 4 ], "name": "IsbelDull", "nickname": "Izzy", "alias": "Isbel", "id": 2, "employment": [ { "organizationName": "Hexviafind", "startDate": "2010-04-27" } ] }, "m": { "senderLocation": [ 48.09, 81.01 ], "inResponseTo": 4, "messageId": 3, "authorId": 2, "message": " like product-y the plan is amazing" } } ]
SQL++’s DISTINCT keyword is used to eliminate duplicate items in results. The following example shows how it works.
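For instance, the following query (a minimal sketch against the sample data above) returns each distinct message author id exactly once:

SELECT DISTINCT VALUE m.authorId FROM GleambookMessages m;

For the sample data, the result is [ 1, 2 ].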
Similar to standard SQL, SQL++ supports unnamed projections (a.k.a. unnamed SELECT clause items), for which names are generated. Name generation has three cases:
As in standard SQL, SQL++ field access expressions can be abbreviated (not recommended) when there is no ambiguity. In the next example, the variable user is the only possible variable reference for fields id, name and alias and thus could be omitted in the query.
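A sketch of such an abbreviated query (the unqualified fields resolve against the user variable):

SELECT id, name, alias FROM GleambookUsers user WHERE id = 1;

For the sample data, this returns the id, name, and alias fields of MargaritaStoddard (field order in the result object may vary).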
For each of its input tuples, the UNNEST clause flattens a collection-valued expression into individual items, producing multiple tuples, each of which is one of the expression’s original input tuples augmented with a flattened item from its collection.
The following example is a query that retrieves the names of the organizations that a selected user has worked for. It uses the UNNEST clause to unnest the nested collection employment in the user’s object.
SELECT u.id AS userId, e.organizationName AS orgName FROM GleambookUsers u UNNEST u.employment e WHERE u.id = 1;
This query returns:
[ { "orgName": "Codetechno", "userId": 1 }, { "orgName": "geomedia", "userId": 1 } ]
Note that UNNEST has SQL’s inner join semantics — that is, if a user has no employment history, no tuple corresponding to that user will be emitted in the result.
As an alternative, the LEFT OUTER UNNEST clause offers SQL’s left outer join semantics. For example, no collection-valued field named hobbies exists in the object for the user whose id is 1, but the following query’s result still includes user 1.
SELECT u.id AS userId, h.hobbyName AS hobby FROM GleambookUsers u LEFT OUTER UNNEST u.hobbies h WHERE u.id = 1;
Returns:
[ { "userId": 1 } ]
Note that if u.hobbies is an empty collection or leads to a MISSING (as above) or NULL value for a given input tuple, there is no corresponding binding value for variable h for that input tuple. A MISSING value will be generated for h so that the input tuple can still be propagated.
The SQL++ UNNEST clause is similar to SQL’s JOIN clause except that it allows its right argument to be correlated to its left argument, as in the examples above — i.e., think “correlated cross-product”. The next example shows this via a query that joins two data sets, GleambookUsers and GleambookMessages, returning user/message pairs. The results contain one object per pair, with result objects containing the user’s name and an entire message. The query can be thought of as saying “for each Gleambook user, unnest the GleambookMessages collection and filter the output with the condition message.authorId = user.id”.
SELECT u.name AS uname, m.message AS message FROM GleambookUsers u UNNEST GleambookMessages m WHERE m.authorId = u.id;
This returns:
[ { "uname": "MargaritaStoddard", "message": " can't stand acast its plan is terrible" }, { "uname": "MargaritaStoddard", "message": " dislike x-phone its touch-screen is horrible" }, { "uname": "MargaritaStoddard", "message": " can't stand acast the network is horrible:(" }, { "uname": "MargaritaStoddard", "message": " like ccast the 3G is awesome:)" }, { "uname": "MargaritaStoddard", "message": " can't stand product-w the touch-screen is terrible" }, { "uname": "IsbelDull", "message": " like product-z its platform is mind-blowing" }, { "uname": "IsbelDull", "message": " like product-y the plan is amazing" } ]
Similarly, the above query can also be expressed as the UNNESTing of a correlated SQL++ subquery:
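One possible formulation (a sketch):

SELECT u.name AS uname, m.message AS message FROM GleambookUsers u UNNEST (SELECT VALUE msg FROM GleambookMessages msg WHERE msg.authorId = u.id) AS m;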
A FROM clause is used for enumerating (i.e., conceptually iterating over) the contents of collections, as in SQL.
In SQL++, in addition to stored collections, a FROM clause can iterate over any intermediate collection returned by a valid SQL++ expression. In the tuple stream generated by a FROM clause, the ordering of the input tuples is not guaranteed to be preserved.
SQL++ permits correlations among FROM terms. Specifically, a FROM binding expression can refer to variables defined to its left in the given FROM clause. Thus, the first unnesting example above could also be expressed as follows:
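One such correlated formulation (a sketch):

SELECT u.id AS userId, e.organizationName AS orgName FROM GleambookUsers u, u.employment e WHERE u.id = 1;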
Similarly, the join intentions of the other UNNEST-based join examples above could be expressed as:
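For instance (a sketch equivalent to the earlier user/message join):

SELECT u.name AS uname, m.message AS message FROM GleambookUsers u, GleambookMessages m WHERE m.authorId = u.id;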
Similar to standard SQL, SQL++ supports implicit FROM binding variables (i.e., aliases), for which a binding variable is generated. SQL++ variable generation falls into three cases:
The next two examples show queries that do not provide binding variables in their FROM clauses.
SELECT GleambookUsers.name, GleambookMessages.message FROM GleambookUsers, GleambookMessages WHERE GleambookMessages.authorId = GleambookUsers.id;
Returns:
[ { "name": "MargaritaStoddard", "message": " like ccast the 3G is awesome:)" }, { "name": "MargaritaStoddard", "message": " can't stand product-w the touch-screen is terrible" }, { "name": "MargaritaStoddard", "message": " can't stand acast its plan is terrible" }, { "name": "MargaritaStoddard", "message": " dislike x-phone its touch-screen is horrible" }, { "name": "MargaritaStoddard", "message": " can't stand acast the network is horrible:(" }, { "name": "IsbelDull", "message": " like product-y the plan is amazing" }, { "name": "IsbelDull", "message": " like product-z its platform is mind-blowing" } ]
SELECT GleambookUsers.name, GleambookMessages.message FROM GleambookUsers, ( SELECT VALUE GleambookMessages FROM GleambookMessages WHERE GleambookMessages.authorId = GleambookUsers.id );
Returns:
Error: "Syntax error: Need an alias for the enclosed expression:\n(select element GleambookMessages\n from GleambookMessages as GleambookMessages\n where (GleambookMessages.authorId = GleambookUsers.id)\n )", "query_from_user": "use TinySocial;\n\nSELECT GleambookUsers.name, GleambookMessages.message\n FROM GleambookUsers,\n (\n SELECT VALUE GleambookMessages\n FROM GleambookMessages\n WHERE GleambookMessages.authorId = GleambookUsers.id\n );"
The join clause in SQL++ supports both inner joins and left outer joins from standard SQL.
Using a JOIN clause, the inner join intent from the preceding examples can also be expressed as follows:
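For example (a sketch equivalent to the earlier comma-separated FROM and UNNEST formulations):

SELECT u.name AS uname, m.message AS message FROM GleambookUsers u JOIN GleambookMessages m ON m.authorId = u.id;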
SQL++ supports SQL’s notion of left outer join. The following query is an example:
SELECT u.name AS uname, m.message AS message FROM GleambookUsers u LEFT OUTER JOIN GleambookMessages m ON m.authorId = u.id;
Returns:
[ { "uname": "MargaritaStoddard", "message": " like ccast the 3G is awesome:)" }, { "uname": "MargaritaStoddard", "message": " can't stand product-w the touch-screen is terrible" }, { "uname": "MargaritaStoddard", "message": " can't stand acast its plan is terrible" }, { "uname": "MargaritaStoddard", "message": " dislike x-phone its touch-screen is horrible" }, { "uname": "MargaritaStoddard", "message": " can't stand acast the network is horrible:(" }, { "uname": "IsbelDull", "message": " like product-y the plan is amazing" }, { "uname": "IsbelDull", "message": " like product-z its platform is mind-blowing" }, { "uname": "EmoryUnk" } ]
For non-matching left-side tuples, SQL++ produces MISSING values for the right-side binding variables; that is why the last object in the above result doesn’t have a message field. Note that this is slightly different from standard SQL, which instead would fill in NULL values for the right-side fields. The reason for this difference is that, for non-matches in its join results, SQL++ views fields from the right-side as being “not there” (a.k.a. MISSING) instead of as being “there but unknown” (i.e., NULL).
The left-outer join query can also be expressed using LEFT OUTER UNNEST:
SELECT u.name AS uname, m.message AS message FROM GleambookUsers u LEFT OUTER UNNEST ( SELECT VALUE message FROM GleambookMessages message WHERE message.authorId = u.id ) m;
In general, in SQL++, SQL-style join queries can also be expressed by UNNEST clauses and left outer join queries can be expressed by LEFT OUTER UNNESTs.
The SQL++ GROUP BY clause generalizes standard SQL’s grouping and aggregation semantics, but it also retains backward compatibility with the standard (relational) SQL GROUP BY and aggregation features.
In a GROUP BY clause, in addition to the binding variable(s) defined for the grouping key(s), SQL++ allows a user to define a group variable by using the clause’s GROUP AS extension to denote the resulting group. After grouping, then, the query’s in-scope variables include the grouping key’s binding variables as well as this group variable, which will be bound to one collection value for each group. This per-group collection (i.e., multiset) value contains one nested object per input tuple in the group; the fields of each such object hold the values of the variables renamed in the parentheses following the group variable’s name. The GROUP AS syntax is as follows:
<GROUP> <AS> Variable ("(" Variable <AS> VariableReference ("," Variable <AS> VariableReference )* ")")?
SELECT * FROM GleambookMessages message GROUP BY message.authorId AS uid GROUP AS msgs(message AS msg);
This first example query returns:
[ { "msgs": [ { "msg": { "senderLocation": [ 38.97, 77.49 ], "inResponseTo": 1, "messageId": 11, "authorId": 1, "message": " can't stand acast its plan is terrible" } }, { "msg": { "senderLocation": [ 41.66, 80.87 ], "inResponseTo": 4, "messageId": 2, "authorId": 1, "message": " dislike x-phone its touch-screen is horrible" } }, { "msg": { "senderLocation": [ 37.73, 97.04 ], "inResponseTo": 2, "messageId": 4, "authorId": 1, "message": " can't stand acast the network is horrible:(" } }, { "msg": { "senderLocation": [ 40.33, 80.87 ], "inResponseTo": 11, "messageId": 8, "authorId": 1, "message": " like ccast the 3G is awesome:)" } }, { "msg": { "senderLocation": [ 42.5, 70.01 ], "inResponseTo": 12, "messageId": 10, "authorId": 1, "message": " can't stand product-w the touch-screen is terrible" } } ], "uid": 1 }, { "msgs": [ { "msg": { "senderLocation": [ 31.5, 75.56 ], "inResponseTo": 1, "messageId": 6, "authorId": 2, "message": " like product-z its platform is mind-blowing" } }, { "msg": { "senderLocation": [ 48.09, 81.01 ], "inResponseTo": 4, "messageId": 3, "authorId": 2, "message": " like product-y the plan is amazing" } } ], "uid": 2 } ]
As we can see from the above query result, each group in the example query’s output has an associated group variable value called msgs that appears in the SELECT *’s result. This variable contains a collection of objects associated with the group; each of the group’s message values appears in the msg field of the objects in the msgs collection.
The group variable in SQL++ makes more complex, composable, nested subqueries over a group possible, which is important given the more complex data model of SQL++ (relative to SQL). As a simple example of this, as we really just want the messages associated with each user, we might wish to avoid the “extra wrapping” of each message as the msg field of an object. (That wrapping is useful in more complex cases, but is essentially just in the way here.) We can use a subquery in the SELECT clause to tunnel through the extra nesting and produce the desired result.
SELECT uid, (SELECT VALUE g.msg FROM g) AS msgs FROM GleambookMessages gbm GROUP BY gbm.authorId AS uid GROUP AS g(gbm as msg);
This variant of the example query returns:
[ { "msgs": [ { "senderLocation": [ 38.97, 77.49 ], "inResponseTo": 1, "messageId": 11, "authorId": 1, "message": " can't stand acast its plan is terrible" }, { "senderLocation": [ 41.66, 80.87 ], "inResponseTo": 4, "messageId": 2, "authorId": 1, "message": " dislike x-phone its touch-screen is horrible" }, { "senderLocation": [ 37.73, 97.04 ], "inResponseTo": 2, "messageId": 4, "authorId": 1, "message": " can't stand acast the network is horrible:(" }, { "senderLocation": [ 40.33, 80.87 ], "inResponseTo": 11, "messageId": 8, "authorId": 1, "message": " like ccast the 3G is awesome:)" }, { "senderLocation": [ 42.5, 70.01 ], "inResponseTo": 12, "messageId": 10, "authorId": 1, "message": " can't stand product-w the touch-screen is terrible" } ], "uid": 1 }, { "msgs": [ { "senderLocation": [ 31.5, 75.56 ], "inResponseTo": 1, "messageId": 6, "authorId": 2, "message": " like product-z its platform is mind-blowing" }, { "senderLocation": [ 48.09, 81.01 ], "inResponseTo": 4, "messageId": 3, "authorId": 2, "message": " like product-y the plan is amazing" } ], "uid": 2 } ]
Because this is a fairly common case, a third variant with output identical to the second variant is also possible:
SELECT uid, msg AS msgs FROM GleambookMessages gbm GROUP BY gbm.authorId AS uid GROUP AS g(gbm as msg);
This variant of the query exploits a bit of SQL-style “syntactic sugar” that SQL++ offers to shorten some user queries. In particular, in the SELECT list, the reference to the GROUP variable field msg – because it references a field of the group variable – is allowed but is “pluralized”. As a result, the msg reference in the SELECT list is implicitly rewritten into the second variant’s SELECT VALUE subquery.
The next example shows a more interesting case involving the use of a subquery in the SELECT list. Here the subquery further processes the groups.
SELECT uid, (SELECT VALUE g.msg FROM g WHERE g.msg.message LIKE '% like%' ORDER BY g.msg.messageId LIMIT 2) AS msgs FROM GleambookMessages gbm GROUP BY gbm.authorId AS uid GROUP AS g(gbm as msg);
This example query returns:
[ { "msgs": [ { "senderLocation": [ 40.33, 80.87 ], "inResponseTo": 11, "messageId": 8, "authorId": 1, "message": " like ccast the 3G is awesome:)" } ], "uid": 1 }, { "msgs": [ { "senderLocation": [ 48.09, 81.01 ], "inResponseTo": 4, "messageId": 3, "authorId": 2, "message": " like product-y the plan is amazing" }, { "senderLocation": [ 31.5, 75.56 ], "inResponseTo": 1, "messageId": 6, "authorId": 2, "message": " like product-z its platform is mind-blowing" } ], "uid": 2 } ]
In the SQL++ syntax, providing named binding variables for GROUP BY key expressions is optional. If a grouping key is missing a user-provided binding variable, the underlying compiler will generate one. Automatic grouping key variable naming falls into three cases in SQL++, much like the treatment of unnamed projections:
The next example illustrates a query that doesn’t provide binding variables for its grouping key expressions.
SELECT authorId, (SELECT VALUE g.msg FROM g WHERE g.msg.message LIKE '% like%' ORDER BY g.msg.messageId LIMIT 2) AS msgs FROM GleambookMessages gbm GROUP BY gbm.authorId GROUP AS g(gbm as msg);
This query returns:
[ { "msgs": [ { "senderLocation": [ 40.33, 80.87 ], "inResponseTo": 11, "messageId": 8, "authorId": 1, "message": " like ccast the 3G is awesome:)" } ], "authorId": 1 }, { "msgs": [ { "senderLocation": [ 48.09, 81.01 ], "inResponseTo": 4, "messageId": 3, "authorId": 2, "message": " like product-y the plan is amazing" }, { "senderLocation": [ 31.5, 75.56 ], "inResponseTo": 1, "messageId": 6, "authorId": 2, "message": " like product-z its platform is mind-blowing" } ], "authorId": 2 } ]
Based on the three variable generation rules, the generated variable for the grouping key expression message.authorId is authorId (which is how it is referred to in the example’s SELECT clause).
The group variable itself is also optional in SQL++’s GROUP BY syntax. If a user’s query does not declare the name and structure of the group variable using GROUP AS, the query compiler will generate a unique group variable whose fields include all of the binding variables defined in the FROM clause of the current enclosing SELECT statement. (In this case the user’s query will not be able to refer to the generated group variable.)
SELECT uid, (SELECT m.message FROM message m WHERE m.message LIKE '% like%' ORDER BY m.messageId LIMIT 2) AS msgs FROM GleambookMessages message GROUP BY message.authorId AS uid;
This query returns:
[ { "msgs": [ { "message": " like ccast the 3G is awesome:)" } ], "uid": 1 }, { "msgs": [ { "message": " like product-y the plan is amazing" }, { "message": " like product-z its platform is mind-blowing" } ], "uid": 2 } ]
Note that in the query above, in principle, message is not an in-scope variable in the SELECT clause. However, the query above is a syntactically-sugared simplification of the following query and it is thus legal, executable, and returns the same result:
SELECT uid, (SELECT g.msg.message FROM g WHERE g.msg.message LIKE '% like%' ORDER BY g.msg.messageId LIMIT 2) AS msgs FROM GleambookMessages gbm GROUP BY gbm.authorId AS uid GROUP AS g(gbm as msg);
In the traditional SQL, which doesn’t support nested data, grouping always also involves the use of aggregation to compute properties of the groups (for example, the average number of messages per user rather than the actual set of messages per user). Each aggregation function in SQL++ takes a collection (for example, the group of messages) as its input and produces a scalar value as its output. These aggregation functions, being truly functional in nature (unlike in SQL), can be used anywhere in a query where an expression is allowed. The following table catalogs the SQL++ built-in aggregation functions and also indicates how each one handles NULL/MISSING values in the input collection or a completely empty input collection:
Function | NULL | MISSING | Empty Collection |
---|---|---|---|
COLL_COUNT | counted | counted | 0 |
COLL_SUM | returns NULL | returns NULL | returns NULL |
COLL_MAX | returns NULL | returns NULL | returns NULL |
COLL_MIN | returns NULL | returns NULL | returns NULL |
COLL_AVG | returns NULL | returns NULL | returns NULL |
ARRAY_COUNT | not counted | not counted | 0 |
ARRAY_SUM | ignores NULL | ignores NULL | returns NULL |
ARRAY_MAX | ignores NULL | ignores NULL | returns NULL |
ARRAY_MIN | ignores NULL | ignores NULL | returns NULL |
ARRAY_AVG | ignores NULL | ignores NULL | returns NULL |
Notice that SQL++ has twice as many functions listed above as there are aggregate functions in SQL-92. This is because SQL++ offers two versions of each – one that handles UNKNOWN values in a semantically strict fashion, where unknown values in the input result in unknown values in the output – and one that handles them in the ad hoc “just ignore the unknown values” fashion that the SQL standard chose to adopt.
ARRAY_AVG( ( SELECT VALUE ARRAY_COUNT(friendIds) FROM GleambookUsers ) );
This example returns:
3.3333333333333335
SELECT uid AS uid, ARRAY_COUNT(grp) AS msgCnt FROM GleambookMessages message GROUP BY message.authorId AS uid GROUP AS grp(message AS msg);
This query returns:
[ { "uid": 1, "msgCnt": 5 }, { "uid": 2, "msgCnt": 2 } ]
Notice how the query forms groups where each group involves a message author and their messages. (SQL cannot do this because the grouped intermediate result is non-1NF in nature.) The query then uses the collection aggregate function ARRAY_COUNT to get the cardinality of each group of messages.
For compatibility with the traditional SQL aggregation functions, SQL++ also offers SQL-92’s aggregation function symbols (COUNT, SUM, MAX, MIN, and AVG) as supported syntactic sugar. The SQL++ compiler rewrites queries that utilize these function symbols into SQL++ queries that only use the SQL++ collection aggregate functions. The following example uses the SQL-92 syntax approach to compute a result that is identical to that of the more explicit SQL++ example above:
SELECT uid, COUNT(*) AS msgCnt FROM GleambookMessages msg GROUP BY msg.authorId AS uid;
It is important to realize that COUNT is actually not a SQL++ built-in aggregation function. Rather, the COUNT query above is using a special “sugared” function symbol that the SQL++ compiler will rewrite as follows:
SELECT uid AS uid, ARRAY_COUNT( (SELECT VALUE 1 FROM `$1` as g) ) AS msgCnt FROM GleambookMessages msg GROUP BY msg.authorId AS uid GROUP AS `$1`(msg AS msg);
The same sort of rewritings apply to the function symbols SUM, MAX, MIN, and AVG. In contrast to the SQL++ collection aggregate functions, these special SQL-92 function symbols can only be used in the same way they are in standard SQL (i.e., with the same restrictions).
SQL++ provides full support for SQL-92 GROUP BY aggregation queries. The following query is such an example:
SELECT msg.authorId, COUNT(*) FROM GleambookMessages msg GROUP BY msg.authorId;
This query outputs:
[ { "authorId": 1, "$1": 5 }, { "authorId": 2, "$1": 2 } ]
In principle, a msg reference in the query’s SELECT clause would be “sugarized” as a collection (as described in Implicit Group Variables). However, since the SELECT expression msg.authorId is syntactically identical to a GROUP BY key expression, it will be internally replaced by the generated group key variable. The following is the equivalent rewritten query that will be generated by the compiler for the query above:
SELECT authorId AS authorId, ARRAY_COUNT( (SELECT g.msg FROM `$1` AS g) ) FROM GleambookMessages msg GROUP BY msg.authorId AS authorId GROUP AS `$1`(msg AS msg);
SQL++ also allows column aliases to be used as GROUP BY keys or ORDER BY keys.
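For instance, the following sketch groups on the alias aid introduced in the SELECT list (the column name for COUNT(*) is generated by the compiler):

SELECT msg.authorId AS aid, COUNT(*) FROM GleambookMessages msg GROUP BY aid;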
Both WHERE clauses and HAVING clauses are used to filter input data based on a condition expression. Only tuples for which the condition expression evaluates to TRUE are propagated. Note that if the condition expression evaluates to NULL or MISSING the input tuple will be discarded.
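For example, the following sketch uses a HAVING clause to keep only groups with more than two messages; for the sample data, only the group for author 1 (with five messages) survives:

SELECT uid, COUNT(*) AS msgCnt FROM GleambookMessages msg GROUP BY msg.authorId AS uid HAVING COUNT(*) > 2;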
The ORDER BY clause is used to globally sort data in either ascending order (i.e., ASC) or descending order (i.e., DESC). During ordering, MISSING and NULL are treated as being smaller than any other value if they are encountered in the ordering key(s). MISSING is treated as smaller than NULL if both occur in the data being sorted. The following example returns all GleambookUsers in descending order by their number of friends.
SELECT VALUE user FROM GleambookUsers AS user ORDER BY ARRAY_COUNT(user.friendIds) DESC;
This query returns:
[ { "userSince": "2012-08-20T10:10:00.000Z", "friendIds": [ 2, 3, 6, 10 ], "gender": "F", "name": "MargaritaStoddard", "nickname": "Mags", "alias": "Margarita", "id": 1, "employment": [ { "organizationName": "Codetechno", "start-date": "2006-08-06" }, { "end-date": "2010-01-26", "organizationName": "geomedia", "start-date": "2010-06-17" } ] }, { "userSince": "2012-07-10T10:10:00.000Z", "friendIds": [ 1, 5, 8, 9 ], "name": "EmoryUnk", "alias": "Emory", "id": 3, "employment": [ { "organizationName": "geomedia", "endDate": "2010-01-26", "startDate": "2010-06-17" } ] }, { "userSince": "2011-01-22T10:10:00.000Z", "friendIds": [ 1, 4 ], "name": "IsbelDull", "nickname": "Izzy", "alias": "Isbel", "id": 2, "employment": [ { "organizationName": "Hexviafind", "startDate": "2010-04-27" } ] } ]
The LIMIT clause is used to limit the result set to a specified constant size. The use of the LIMIT clause is illustrated in the next example.
SELECT VALUE user FROM GleambookUsers AS user ORDER BY len(user.friendIds) DESC LIMIT 1;
This query returns:
[ { "userSince": "2012-08-20T10:10:00.000Z", "friendIds": [ 2, 3, 6, 10 ], "gender": "F", "name": "MargaritaStoddard", "nickname": "Mags", "alias": "Margarita", "id": 1, "employment": [ { "organizationName": "Codetechno", "start-date": "2006-08-06" }, { "end-date": "2010-01-26", "organizationName": "geomedia", "start-date": "2010-06-17" } ] } ]
As in standard SQL, WITH clauses are available to improve the modularity of a query. The next query shows an example.
WITH avgFriendCount AS ( SELECT VALUE AVG(ARRAY_COUNT(user.friendIds)) FROM GleambookUsers AS user )[0] SELECT VALUE user FROM GleambookUsers user WHERE ARRAY_COUNT(user.friendIds) > avgFriendCount;
This query returns:
[ { "userSince": "2012-08-20T10:10:00.000Z", "friendIds": [ 2, 3, 6, 10 ], "gender": "F", "name": "MargaritaStoddard", "nickname": "Mags", "alias": "Margarita", "id": 1, "employment": [ { "organizationName": "Codetechno", "start-date": "2006-08-06" }, { "end-date": "2010-01-26", "organizationName": "geomedia", "start-date": "2010-06-17" } ] }, { "userSince": "2012-07-10T10:10:00.000Z", "friendIds": [ 1, 5, 8, 9 ], "name": "EmoryUnk", "alias": "Emory", "id": 3, "employment": [ { "organizationName": "geomedia", "endDate": "2010-01-26", "startDate": "2010-06-17" } ] } ]
The query is equivalent to the following, more complex, inlined form of the query:
SELECT * FROM GleambookUsers user WHERE ARRAY_COUNT(user.friendIds) > ( SELECT VALUE AVG(ARRAY_COUNT(user.friendIds)) FROM GleambookUsers AS user ) [0];
WITH can be particularly useful when a value needs to be used several times in a query.
Before proceeding further, notice that both the WITH query and its equivalent inlined variant include the syntax “[0]” – this is due to a noteworthy difference between SQL++ and SQL-92. In SQL-92, whenever a scalar value is expected and it is being produced by a query expression, the SQL-92 query processor will evaluate the expression, check that there is only one row and column in the result at runtime, and then coerce the one-row/one-column tabular result into a scalar value. SQL++, being designed to deal with nested data and schema-less data, does not (and should not) do this. Collection-valued data is perfectly legal in most SQL++ contexts, and its data is schema-less, so a query processor rarely knows exactly what to expect where and such automatic conversion is often not desirable. Thus, in the queries above, the use of “[0]” extracts the first (i.e., 0th) element of an array-valued query expression’s result; this is needed above, even though the result is an array of one element, to extract the only element in the singleton array and obtain the desired scalar for the comparison.
Similar to WITH clauses, LET clauses can be useful when a (complex) expression is used several times within a query, allowing it to be written once to make the query more concise. The next query shows an example.
SELECT u.name AS uname, messages AS messages FROM GleambookUsers u LET messages = (SELECT VALUE m FROM GleambookMessages m WHERE m.authorId = u.id) WHERE EXISTS messages;
This query lists GleambookUsers that have posted GleambookMessages and shows all authored messages for each listed user. It returns:
[ { "uname": "MargaritaStoddard", "messages": [ { "senderLocation": [ 38.97, 77.49 ], "inResponseTo": 1, "messageId": 11, "authorId": 1, "message": " can't stand acast its plan is terrible" }, { "senderLocation": [ 41.66, 80.87 ], "inResponseTo": 4, "messageId": 2, "authorId": 1, "message": " dislike x-phone its touch-screen is horrible" }, { "senderLocation": [ 37.73, 97.04 ], "inResponseTo": 2, "messageId": 4, "authorId": 1, "message": " can't stand acast the network is horrible:(" }, { "senderLocation": [ 40.33, 80.87 ], "inResponseTo": 11, "messageId": 8, "authorId": 1, "message": " like ccast the 3G is awesome:)" }, { "senderLocation": [ 42.5, 70.01 ], "inResponseTo": 12, "messageId": 10, "authorId": 1, "message": " can't stand product-w the touch-screen is terrible" } ] }, { "uname": "IsbelDull", "messages": [ { "senderLocation": [ 31.5, 75.56 ], "inResponseTo": 1, "messageId": 6, "authorId": 2, "message": " like product-z its platform is mind-blowing" }, { "senderLocation": [ 48.09, 81.01 ], "inResponseTo": 4, "messageId": 3, "authorId": 2, "message": " like product-y the plan is amazing" } ] } ]
This query is equivalent to the following query that does not use the LET clause:
SELECT u.name AS uname, ( SELECT VALUE m FROM GleambookMessages m WHERE m.authorId = u.id ) AS messages FROM GleambookUsers u WHERE EXISTS ( SELECT VALUE m FROM GleambookMessages m WHERE m.authorId = u.id );
UNION ALL can be used to combine two input arrays or multisets into one. As in SQL, there is no ordering guarantee on the contents of the output stream. However, unlike SQL, SQL++ does not constrain what the data looks like on the input streams; in particular, it allows heterogeneity on the input and output streams. A type error will be raised if one of the inputs is not a collection. The following odd but legal query is an example:
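A sketch of such a query (its result mixes an object produced by the first branch with bare message strings produced by the second branch):

SELECT u.name AS uname FROM GleambookUsers u WHERE u.id = 2 UNION ALL SELECT VALUE m.message FROM GleambookMessages m WHERE m.authorId = 2;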
In SQL++, an arbitrary subquery can appear anywhere that an expression can appear. Unlike SQL-92, as was just alluded to, the subqueries in a SELECT list or a boolean predicate need not return singleton, single-column relations. Instead, they may return arbitrary collections. For example, the following query is a variant of the prior group-by query examples; it retrieves an array of up to two “dislike” messages per user.
SELECT uid, (SELECT VALUE m.msg FROM msgs m WHERE m.msg.message LIKE '%dislike%' ORDER BY m.msg.messageId LIMIT 2) AS msgs FROM GleambookMessages message GROUP BY message.authorId AS uid GROUP AS msgs(message AS msg);
For our sample data set, this query returns:
[ { "msgs": [ { "senderLocation": [ 41.66, 80.87 ], "inResponseTo": 4, "messageId": 2, "authorId": 1, "message": " dislike x-phone its touch-screen is horrible" } ], "uid": 1 }, { "msgs": [ ], "uid": 2 } ]
Note that a subquery, like a top-level SELECT statement, always returns a collection – regardless of where within a query the subquery occurs – and again, its result is never automatically cast into a scalar.
SQL++ offers the following additional features beyond SQL-92 (hence the “++” in its name):
The following matrix is a quick “SQL-92 compatibility cheat sheet” for SQL++.
Feature | SQL++ | SQL-92 | Why different? |
---|---|---|---|
SELECT * | Returns nested objects | Returns flattened concatenated objects | Nested collections are 1st class citizens |
SELECT list | order not preserved | order preserved | Fields in a JSON object are not ordered |
Subquery | Returns a collection | The returned collection is cast into a scalar value if the subquery appears in a SELECT list or on one side of a comparison or as input to a function | Nested collections are 1st class citizens |
LEFT OUTER JOIN | Fills in MISSING(s) for non-matches | Fills in NULL(s) for non-matches | “Absence” is more appropriate than “unknown” here. |
UNION ALL | Allows heterogeneous inputs and output | Input streams must be UNION-compatible and output field names are drawn from the first input stream | Heterogeneity and nested collections are common |
IN constant_expr | The constant expression has to be an array or multiset, i.e., [..,..,…] | The constant collection can be represented as comma-separated items in a paren pair | Nested collections are 1st class citizens |
String literal | Double quotes or single quotes | Single quotes only | Double quoted strings are pervasive |
Delimited identifiers | Backticks | Double quotes | Double quoted strings are pervasive |
The following SQL-92 features are not implemented yet. However, SQL++ does not conflict with those features:
A SQL++ query can potentially result in one of the following errors: a syntax error, an identifier resolution error, a type error, or a resource error.
If the query processor runs into any error, it will terminate the ongoing processing of the query and immediately return an error message to the client.
A valid SQL++ query must satisfy the SQL++ grammar rules. Otherwise, a syntax error will be raised.
Referring to an undefined identifier can cause an error if the identifier cannot be successfully resolved as a valid field access.
SELECT * FROM GleambookUser user;
Assuming “GleambookUser” is a typo that omits the trailing “s” of the dataset name, we will get an identifier resolution error as follows:
Error: Cannot find dataset GleambookUser in dataverse Default nor an alias with name GleambookUser!
SELECT name, message FROM GleambookUsers u JOIN GleambookMessages m ON m.authorId = u.id;
If the compiler cannot figure out all possible fields in GleambookUsers and GleambookMessages, we will get an identifier resolution error as follows:
Error: Cannot resolve ambiguous alias reference for undefined identifier name
The SQL++ compiler does type checks based on its available type information. In addition, the SQL++ runtime also reports type errors if a data model instance it processes does not satisfy the type requirement.
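For example, the following call (an illustration; the exact error message may vary) raises a type error because the built-in abs function expects a numeric argument:

abs("123");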
A query can potentially exhaust system resources, such as the number of open files and disk space. For instance, the following two resource errors could potentially be seen when running the system:
Error: no space left on device
Error: too many open files
The “no space left on device” issue usually can be fixed by cleaning up disk space and reserving more disk space for the system. The “too many open files” issue usually can be fixed by a system administrator, following the instructions here.
Statement ::= ( SingleStatement ( ";" )? )* <EOF>
SingleStatement ::= DatabaseDeclaration | FunctionDeclaration | CreateStatement | DropStatement | LoadStatement | SetStatement | InsertStatement | DeleteStatement | Query ";"
In addition to queries, an implementation of SQL++ needs to support statements for data definition and manipulation purposes as well as controlling the context to be used in evaluating SQL++ expressions. This section details the DDL and DML statements supported in the SQL++ language as realized today in Apache AsterixDB.
DatabaseDeclaration ::= "USE" Identifier
At the uppermost level, the world of data is organized into data namespaces called dataverses. To set the default dataverse for a series of statements, the USE statement is provided in SQL++.
As an example, the following statement sets the default dataverse to be “TinySocial”.
USE TinySocial;
When writing a complex SQL++ query, it can sometimes be helpful to define one or more auxiliary functions that each address a sub-piece of the overall query. The declare function statement supports the creation of such helper functions. In general, the function body (expression) can be any legal SQL++ query expression.
FunctionDeclaration ::= "DECLARE" "FUNCTION" Identifier ParameterList "{" Expression "}" ParameterList ::= "(" ( <VARIABLE> ( "," <VARIABLE> )* )? ")"
The following is a simple example of a temporary SQL++ function definition and its use.
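As a sketch (the function name and body are illustrative, and assume the GleambookUsers dataset defined later in this section), such a declaration and a query that uses it might look like this:
DECLARE FUNCTION friendInfo(userId) { (SELECT u.id, u.name FROM GleambookUsers u WHERE u.id = userId)[0] };
SELECT VALUE friendInfo(2);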
CreateStatement ::= "CREATE" ( DatabaseSpecification | TypeSpecification | DatasetSpecification | IndexSpecification | FunctionSpecification ) QualifiedName ::= Identifier ( "." Identifier )? DoubleQualifiedName ::= Identifier "." Identifier ( "." Identifier )?
The CREATE statement in SQL++ is used for creating dataverses as well as other persistent artifacts in a dataverse. It can be used to create new dataverses, datatypes, datasets, indexes, and user-defined SQL++ functions.
DatabaseSpecification ::= "DATAVERSE" Identifier IfNotExists
The CREATE DATAVERSE statement is used to create new dataverses. To ease the authoring of reusable SQL++ scripts, an optional IF NOT EXISTS clause is included to allow creation to be requested either unconditionally or only if the dataverse does not already exist. If this clause is absent, an error is returned if a dataverse with the indicated name already exists.
The following example creates a new dataverse named TinySocial if one does not already exist.
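CREATE DATAVERSE TinySocial IF NOT EXISTS;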
TypeSpecification ::= "TYPE" FunctionOrTypeName IfNotExists "AS" ObjectTypeDef FunctionOrTypeName ::= QualifiedName IfNotExists ::= ( <IF> <NOT> <EXISTS> )? TypeExpr ::= ObjectTypeDef | TypeReference | ArrayTypeDef | MultisetTypeDef ObjectTypeDef ::= ( <CLOSED> | <OPEN> )? "{" ( ObjectField ( "," ObjectField )* )? "}" ObjectField ::= Identifier ":" ( TypeExpr ) ( "?" )? NestedField ::= Identifier ( "." Identifier )* IndexField ::= NestedField ( ":" TypeReference )? TypeReference ::= Identifier ArrayTypeDef ::= "[" ( TypeExpr ) "]" MultisetTypeDef ::= "{{" ( TypeExpr ) "}}"
The CREATE TYPE statement is used to create a new named datatype. This type can then be used to create stored collections or utilized when defining one or more other datatypes. Much more information about the data model is available in the data model reference guide. A new type can be an object type, a renaming of another type, an array type, or a multiset type. An object type can be defined as being either open or closed. Instances of a closed object type are not permitted to contain fields other than those specified in the create type statement. Instances of an open object type may carry additional fields, and open is the default for new types if neither option is specified.
The following example creates a new object type called GleambookUserType. Since it is defined as (defaulting to) being an open type, instances will be permitted to contain more than what is specified in the type definition. The first four fields are essentially traditional typed name/value pairs (much like SQL fields). The friendIds field is a multiset of integers. The employment field is an array of instances of another named object type, EmploymentType.
CREATE TYPE GleambookUserType AS { id: int, alias: string, name: string, userSince: datetime, friendIds: {{ int }}, employment: [ EmploymentType ] };
The next example creates a new object type, closed this time, called MyUserTupleType. Instances of this closed type will not be permitted to have extra fields, although the alias field is marked as optional and may thus be NULL or MISSING in legal instances of the type. Note that the type of the id field in the example is UUID. This field type can be used if you want to have this field be an autogenerated-PK field. (Refer to the Datasets section later for more details on such fields.)
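A sketch of such a definition (the name field is included here only for illustration; a real definition may carry additional fields beyond the id and optional alias fields described above):
CREATE TYPE MyUserTupleType AS CLOSED { id: uuid, alias: string?, name: string };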
DatasetSpecification ::= ( <INTERNAL> )? <DATASET> QualifiedName "(" QualifiedName ")" IfNotExists PrimaryKey ( <ON> Identifier )? ( <HINTS> Properties )? ( "USING" "COMPACTION" "POLICY" CompactionPolicy ( Configuration )? )? ( <WITH> <FILTER> <ON> Identifier )? | <EXTERNAL> <DATASET> QualifiedName "(" QualifiedName ")" IfNotExists <USING> AdapterName Configuration ( <HINTS> Properties )? ( <USING> <COMPACTION> <POLICY> CompactionPolicy ( Configuration )? )? AdapterName ::= Identifier Configuration ::= "(" ( KeyValuePair ( "," KeyValuePair )* )? ")" KeyValuePair ::= "(" StringLiteral "=" StringLiteral ")" Properties ::= ( "(" Property ( "," Property )* ")" )? Property ::= Identifier "=" ( StringLiteral | IntegerLiteral ) FunctionSignature ::= FunctionOrTypeName "@" IntegerLiteral PrimaryKey ::= <PRIMARY> <KEY> NestedField ( "," NestedField )* ( <AUTOGENERATED> )? CompactionPolicy ::= Identifier
The CREATE DATASET statement is used to create a new dataset. Datasets are named multisets of object type instances; they are where data lives persistently and are the usual targets for SQL++ queries. Datasets are typed, and the system ensures that their contents conform to their type definitions. An Internal dataset (the default kind) is a dataset whose content lives within and is managed by the system. It is required to have a specified unique primary key field which uniquely identifies the contained objects. (The primary key is also used in secondary indexes to identify the indexed primary data objects.)
Internal datasets contain several advanced options that can be specified when appropriate. One such option is that random primary key (UUID) values can be auto-generated by declaring the field to be UUID and putting “AUTOGENERATED” after the “PRIMARY KEY” identifier. In this case, unlike other non-optional fields, a value for the auto-generated PK field should not be provided at insertion time by the user since each object’s primary key field value will be auto-generated by the system.
Another advanced option, when creating an Internal dataset, is to specify the merge policy that controls how the underlying LSM storage components are merged. (The system supports Log-Structured Merge tree based physical storage for Internal datasets.) Currently the system supports four different component merging policies that can be chosen per dataset: no-merge, constant, prefix, and correlated-prefix. The no-merge policy simply never merges disk components. The constant policy merges disk components when the number of components reaches a constant number k that can be configured by the user. The prefix policy relies on both component sizes and the number of components to decide which components to merge. It works by first trying to identify the smallest ordered (oldest to newest) sequence of components such that the sequence does not contain a single component that exceeds some threshold size M and that either the sum of the components’ sizes exceeds M or the number of components in the sequence exceeds another threshold C. If such a sequence exists, the components in the sequence are merged together to form a single component. Finally, the correlated-prefix policy is similar to the prefix policy, but it delegates the decision of merging the disk components of all the indexes in a dataset to the primary index. When the correlated-prefix policy decides that the primary index needs to be merged (using the same decision criteria as the prefix policy), it issues successive merge requests on behalf of all other indexes associated with the same dataset. The system’s default policy is the prefix policy, except when a filter is defined on a dataset, in which case the preferred policy is correlated-prefix.
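For illustration, the following sketch requests the prefix policy for a variant of the GleambookUsers dataset created later in this section; the two policy configuration parameter names shown are assumptions and should be verified against the system’s configuration reference:
CREATE DATASET GleambookUsers(GleambookUserType) PRIMARY KEY id USING COMPACTION POLICY prefix (("max-mergable-component-size"="134217728"), ("max-tolerance-component-count"="5"));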
Another advanced option shown in the syntax above, related to performance and mentioned above, is that a filter can optionally be created on a field to further optimize range queries with predicates on the filter’s field. Filters allow some range queries to avoid searching all LSM components when the query conditions match the filter. (Refer to Filter-Based LSM Index Acceleration for more information about filters.)
An External dataset, in contrast to an Internal dataset, has data stored outside of the system’s control. Files living in HDFS or in the local filesystem(s) of a cluster’s nodes are currently supported. External dataset support allows SQL++ queries to treat foreign data as though it were stored in the system, making it possible to query “legacy” file data (for example, Hive data) without having to physically import it. When defining an External dataset, an appropriate adapter type must be selected for the desired external data. (See the Guide to External Data for more information on the available adapters.)
The following example creates an Internal dataset for storing GleambookUserType objects. It specifies that their id field is their primary key.
CREATE INTERNAL DATASET GleambookUsers(GleambookUserType) PRIMARY KEY id;
The next example creates another Internal dataset (the default kind when no dataset kind is specified) for storing MyUserTupleType objects. It specifies that the id field should be used as the primary key for the dataset. It also specifies that the id field is an auto-generated field, meaning that a randomly generated UUID value should be assigned to each incoming object by the system. (A user should therefore not attempt to provide a value for this field.) Note that the id field’s declared type must be UUID in this case.
CREATE DATASET MyUsers(MyUserTupleType) PRIMARY KEY id AUTOGENERATED;
The next example creates an External dataset for querying LineItemType objects. The choice of the hdfs adapter means that this dataset’s data actually resides in HDFS. The example CREATE statement also provides parameters used by the hdfs adapter: the URL and path needed to locate the data in HDFS and a description of the data format.
CREATE EXTERNAL DATASET LineItem(LineItemType) USING hdfs ( ("hdfs"="hdfs://HOST:PORT"), ("path"="HDFS_PATH"), ("input-format"="text-input-format"), ("format"="delimited-text"), ("delimiter"="|"));
IndexSpecification ::= <INDEX> Identifier IfNotExists <ON> QualifiedName "(" ( IndexField ) ( "," IndexField )* ")" ( "type" IndexType "?")? ( <ENFORCED> )? IndexType ::= <BTREE> | <RTREE> | <KEYWORD> | <NGRAM> "(" IntegerLiteral ")"
The CREATE INDEX statement creates a secondary index on one or more fields of a specified dataset. Supported index types include BTREE for totally ordered datatypes, RTREE for spatial data, and KEYWORD and NGRAM for textual (string) data. An index can be created on a nested field (or fields) by providing a valid path expression as an index field identifier.
An indexed field is not required to be part of the datatype associated with a dataset if the dataset’s datatype is declared as open, the field’s type is provided along with its name, and the ENFORCED keyword is specified at the end of the index definition. ENFORCING an open field introduces a check that makes sure that the actual type of the indexed field (if the optional field exists in the object) always matches this specified (open) field type.
The following example creates a btree index called gbAuthorIdx on the authorId field of the GleambookMessages dataset. This index can be useful for accelerating exact-match queries, range search queries, and joins involving the authorId field.
CREATE INDEX gbAuthorIdx ON GleambookMessages(authorId) TYPE BTREE;
The following example creates an open btree index called gbSendTimeIdx on the (non-predeclared) sendTime field of the GleambookMessages dataset having datetime type. This index can be useful for accelerating exact-match queries, range search queries, and joins involving the sendTime field.
CREATE INDEX gbSendTimeIdx ON GleambookMessages(sendTime: datetime?) TYPE BTREE ENFORCED;
The following example creates a btree index called crpUserScrNameIdx on screenName, a nested field residing within an object-valued user field in the ChirpMessages dataset. This index can be useful for accelerating exact-match queries, range search queries, and joins involving the nested screenName field. Such nested fields must be singular, i.e., one cannot index through (or on) an array-valued field.
CREATE INDEX crpUserScrNameIdx ON ChirpMessages(user.screenName) TYPE BTREE;
The following example creates an rtree index called gbSenderLocIndex on the sender-location field of the GleambookMessages dataset. This index can be useful for accelerating queries that use the spatial-intersect function in a predicate involving the sender-location field.
CREATE INDEX gbSenderLocIndex ON GleambookMessages("sender-location") TYPE RTREE;
The following example creates a 3-gram index called fbUserIdx on the name field of the GleambookUsers dataset. This index can be used to accelerate some similarity or substring matching queries on the name field. For details refer to the document on similarity queries.
CREATE INDEX fbUserIdx ON GleambookUsers(name) TYPE NGRAM(3);
The following example creates a keyword index called fbMessageIdx on the message field of the GleambookMessages dataset. This keyword index can be used to optimize queries with token-based similarity predicates on the message field. For details refer to the document on similarity queries.
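CREATE INDEX fbMessageIdx ON GleambookMessages(message) TYPE KEYWORD;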
The create function statement creates a named function that can then be used and reused in SQL++ queries. The body of a function can be any SQL++ expression involving the function’s parameters.
FunctionSpecification ::= "FUNCTION" FunctionOrTypeName IfNotExists ParameterList "{" Expression "}"
The following is an example of a CREATE FUNCTION statement which is similar to our earlier DECLARE FUNCTION example. It differs from that example in that it results in a function that is persistently registered by name in the specified dataverse (the current dataverse being used, if not otherwise specified).
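Continuing the earlier (illustrative) friendInfo sketch, a persistent version might be written as:
CREATE FUNCTION friendInfo(userId) { (SELECT u.id, u.name FROM GleambookUsers u WHERE u.id = userId)[0] };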
DropStatement ::= "DROP" ( "DATAVERSE" Identifier IfExists | "TYPE" FunctionOrTypeName IfExists | "DATASET" QualifiedName IfExists | "INDEX" DoubleQualifiedName IfExists | "FUNCTION" FunctionSignature IfExists ) IfExists ::= ( "IF" "EXISTS" )?
The DROP statement in SQL++ is the inverse of the CREATE statement. It can be used to drop dataverses, datatypes, datasets, indexes, and functions.
The following examples illustrate some uses of the DROP statement.
DROP DATASET GleambookUsers IF EXISTS; DROP INDEX GleambookMessages.gbSenderLocIndex; DROP TYPE TinySocial2.GleambookUserType; DROP FUNCTION friendInfo@1; DROP DATAVERSE TinySocial;
When an artifact is dropped, it will be dropped from the current dataverse if none is specified (see the DROP DATASET example above) or from the dataverse explicitly named by fully qualifying the artifact name in the DROP statement (see the DROP TYPE example above). When specifying an index to drop, the index name must be qualified by the dataset that it indexes. When specifying a function to drop, since SQL++ allows functions to be overloaded by their number of arguments, the identifying name of the function to be dropped must explicitly include that information. (friendInfo@1 above denotes the 1-argument function named friendInfo in the current dataverse.)
LoadStatement ::= <LOAD> <DATASET> QualifiedName <USING> AdapterName Configuration ( <PRE-SORTED> )?
The LOAD statement is used to initially populate a dataset via bulk loading of data from an external file. An appropriate adapter must be selected to handle the nature of the desired external data. The LOAD statement accepts the same adapters and the same parameters as discussed earlier for External datasets. (See the guide to external data for more information on the available adapters.) If a dataset has an auto-generated primary key field, the file to be imported should not include that field in it.
The following example shows how to bulk load the GleambookUsers dataset from an external file containing data that has been prepared in ADM (Asterix Data Model) format.
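A sketch of such a load, assuming the localfs adapter and a hypothetical node address and file path:
LOAD DATASET GleambookUsers USING localfs (("path"="127.0.0.1:///home/user/gbu.adm"), ("format"="adm"));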
InsertStatement ::= <INSERT> <INTO> QualifiedName Query
The SQL++ INSERT statement is used to insert new data into a dataset. The data to be inserted comes from a SQL++ query expression. This expression can be as simple as a constant expression, or in general it can be any legal SQL++ query. If the target dataset has an auto-generated primary key field, the insert statement should not include a value for that field in it. (The system will automatically extend the provided object with this additional field and a corresponding value.) Insertion will fail if the dataset already has data with the primary key value(s) being inserted.
Inserts are processed transactionally by the system. The transactional scope of each insert transaction is the insertion of a single object plus its affiliated secondary index entries (if any). If the query part of an insert returns a single object, then the INSERT statement will be a single, atomic transaction. If the query part returns multiple objects, each object being inserted will be treated as a separate transaction. The following example illustrates a query-based insertion.
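A sketch of such an insertion, assuming a hypothetical UsersCopy dataset with a compatible type has already been created:
INSERT INTO UsersCopy (SELECT VALUE user FROM GleambookUsers user);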
UpsertStatement ::= <UPSERT> <INTO> QualifiedName Query
The SQL++ UPSERT statement syntactically mirrors the INSERT statement discussed above. The difference lies in its semantics, which for UPSERT are “add or replace” instead of the INSERT “add if not present, else error” semantics. Whereas an INSERT can fail if another object already exists with the specified key, the analogous UPSERT will replace the previous object’s value with that of the new object in such cases.
The following example illustrates a query-based upsert operation.
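A sketch of such an upsert, against the same hypothetical UsersCopy dataset used in the insert example above:
UPSERT INTO UsersCopy (SELECT VALUE user FROM GleambookUsers user);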
DeleteStatement ::= <DELETE> <FROM> QualifiedName ( ( <AS> )? Variable )? ( <WHERE> Expression )?
The SQL++ DELETE statement is used to delete data from a target dataset. The data to be deleted is identified by a boolean expression involving the variable bound to the target dataset in the DELETE statement.
Deletes are processed transactionally by the system. The transactional scope of each delete transaction is the deletion of a single object plus its affiliated secondary index entries (if any). If the boolean expression for a delete identifies a single object, then the DELETE statement itself will be a single, atomic transaction. If the expression identifies multiple objects, then each object deleted will be handled as a separate transaction.
The following examples illustrate single-object deletions.
DELETE FROM GleambookUsers WHERE id = 5;
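The optional variable binding allowed by the grammar can also be used; the following sketch deletes a different (illustrative) object via an explicitly bound variable:
DELETE FROM GleambookUsers user WHERE user.id = 8;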
All reserved keywords are listed in the following table:
AND | ANY | APPLY | AS | ASC | AT |
AUTOGENERATED | BETWEEN | BTREE | BY | CASE | CLOSED |
CREATE | COMPACTION | COMPACT | CONNECT | CORRELATE | DATASET |
COLLECTION | DATAVERSE | DECLARE | DEFINITION | DELETE | DESC |
DISCONNECT | DISTINCT | DROP | ELEMENT | ELSE | END |
ENFORCED | EVERY | EXPLAIN |
EXCEPT | EXIST | EXTERNAL | FEED | FILTER | FLATTEN |
FOR | FROM | FULL | FUNCTION | GROUP | HAVING |
HINTS | IF | INTO | IN | INDEX | INGESTION |
INNER | INSERT | INTERNAL | INTERSECT | IS | JOIN |
KEYWORD | LEFT | LETTING | LET | LIKE | LIMIT |
LOAD | NODEGROUP | NGRAM | NOT | OFFSET | ON |
OPEN | OR | ORDER | OUTER | OUTPUT | PATH |
POLICY | PRE-SORTED | PRIMARY | RAW | REFRESH | RETURN |
RTREE | RUN | SATISFIES | SECONDARY | SELECT | SET |
SOME | TEMPORARY | THEN | TYPE | UNKNOWN | UNNEST |
UPDATE | USE | USING | VALUE | WHEN | WHERE |
WITH | WRITE |
The SET statement can be used to override some cluster-wide configuration parameters for a specific request:
SET <IDENTIFIER> <STRING_LITERAL>
As parameter identifiers are qualified names (containing a ‘.’) they have to be escaped using backticks (``). Note that changing query parameters will not affect query correctness but only impact performance characteristics, such as response time and throughput.
The system can execute each request using multiple cores on multiple machines (a.k.a., partitioned parallelism) in a cluster. A user can manually specify the maximum execution parallelism for a request to scale it up and down using the following parameter:
compiler.parallelism: the maximum number of CPU cores that can be used to process a query. There are three cases for the value p of compiler.parallelism:
In the system, each blocking runtime operator such as join, group-by, and order-by works within a fixed memory budget, and can gracefully spill to disk if the memory budget is smaller than the amount of data it has to hold. A user can manually configure the memory budget of those operators within a query. The supported configurable memory parameters are:
compiler.groupmemory: the memory budget that each parallel group-by operator instance can use; 32MB is the default budget.
compiler.sortmemory: the memory budget that each parallel sort operator instance can use; 32MB is the default budget.
compiler.joinmemory: the memory budget that each parallel hash join operator instance can use; 32MB is the default budget.
For each memory budget value, you can use a 64-bit integer value with a 1024-based binary unit suffix (for example, B, KB, MB, GB). If there is no user-provided suffix, “B” is the default suffix. See the following examples.
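For instance, the following illustrative request caps execution at 16 cores and gives each sort operator instance a 64MB budget before running an order-by query:
SET `compiler.parallelism` "16";
SET `compiler.sortmemory` "64MB";
SELECT VALUE m FROM GleambookMessages m ORDER BY m.authorId;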