A dataset represents an SQL query, or more generally, an abstract set of rows in the database. Datasets can be used to create, retrieve, update and delete records.
Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):
my_posts = DB[:posts].where(:author => 'david') # no records are retrieved
my_posts.all # records are retrieved
my_posts.all # records are retrieved again
Most dataset methods return modified copies of the dataset (functional style), so you can reuse different datasets to access data:
posts = DB[:posts]
davids_posts = posts.where(:author => 'david')
old_posts = posts.where('stamp < ?', Date.today - 7)
davids_old_posts = davids_posts.where('stamp < ?', Date.today - 7)
Datasets are Enumerable objects, so they can be manipulated using any of the Enumerable methods, such as map, inject, etc.
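For example, assuming a posts table with a title column (a small sketch, not from the original guide):

ds = DB[:posts]
ds.map{|row| row[:title]}        # => array of title values, retrieved with one SELECT
ds.inject(0){|sum, row| sum + 1} # => row count computed in Ruby via Enumerable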
For more information, see the “Dataset Basics” guide
These methods all return modified copies of the receiver.
The dataset options that require the removal of cached columns if changed.
These symbols have _join methods created (e.g. inner_join) that call #join_table with the symbol, passing along the arguments and block from the method call.
Hash of extension name symbols to callable objects to load the extension into the Dataset object (usually by extending it with a module defined in the extension).
All methods that return modified datasets with a joined table added.
Which options don’t affect the SQL generation. Used by simple_select_all? to determine if this is a simple SELECT * FROM table.
Methods that return modified datasets
The FROM clause entry types that allow a dataset to be considered a simple_select_all.
These symbols have _join methods created (e.g. natural_join). They accept a table argument and options hash which is passed to #join_table, and they raise an error if called with a block.
Register an extension callback for Dataset objects. ext should be the extension name symbol, and mod should either be a Module that the dataset is extended with, or a callable object called with the database object. If mod is not provided, a block can be provided and is treated as the mod object.
If mod is a module, this also registers a Database extension that will extend all of the database’s datasets.
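A minimal usage sketch (the extension name and module here are hypothetical, not part of Sequel):

module MyDatasetMethods
  # Hypothetical helper added to any dataset that loads the extension
  def first_or_empty
    first || {}
  end
end
Sequel::Dataset.register_extension(:my_dataset_methods, MyDatasetMethods)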
# File lib/sequel/dataset/query.rb, line 56
def self.register_extension(ext, mod=nil, &block)
  if mod
    raise(Error, "cannot provide both mod and block to Dataset.register_extension") if block
    if mod.is_a?(Module)
      block = proc{|ds| ds.extend(mod)}
      Sequel::Database.register_extension(ext){|db| db.extend_datasets(mod)}
    else
      block = mod
    end
  end
  Sequel.synchronize{EXTENSIONS[ext] = block}
end
Save original clone implementation, as some other methods need to call it internally.
Alias for where.
# File lib/sequel/dataset/query.rb, line 70 def and(*cond, &block) where(*cond, &block) end
Returns a new clone of the dataset with the given options merged. If the options changed include options in COLUMN_CHANGE_OPTS, the cached columns are deleted. This method should generally not be called directly by user code.
# File lib/sequel/dataset/query.rb, line 90
def clone(opts = OPTS)
  c = super(:freeze=>false)
  c.opts.merge!(opts)
  unless opts.each_key{|o| break if COLUMN_CHANGE_OPTS.include?(o)}
    c.clear_columns_cache
  end
  c.freeze if frozen? # SEQUEL5: Remove if frozen?
  c
end
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. If a block is given, it is treated as a virtual row block, similar to where. Raises an error if arguments are given and DISTINCT ON is not supported.
DB[:items].distinct # SQL: SELECT DISTINCT * FROM items
DB[:items].order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
DB[:items].order(:id).distinct{func(:id)} # SQL: SELECT DISTINCT ON (func(id)) * FROM items ORDER BY id
# File lib/sequel/dataset/query.rb, line 123 def distinct(*args, &block) virtual_row_columns(args, block) raise(InvalidOperation, "DISTINCT ON not supported") if !args.empty? && !supports_distinct_on? clone(:distinct => args.freeze) end
Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options:

:alias :: Use the given value as the #from_self alias
:all :: Set to true to use EXCEPT ALL instead of EXCEPT, so duplicate rows can occur
:from_self :: Set to false to not wrap the returned dataset in a #from_self, use with care.
DB[:items].except(DB[:other_items])
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS t1

DB[:items].except(DB[:other_items], :all=>true, :from_self=>false)
# SELECT * FROM items EXCEPT ALL SELECT * FROM other_items

DB[:items].except(DB[:other_items], :alias=>:i)
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb, line 146 def except(dataset, opts=OPTS) raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except? raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all? compound_clone(:except, dataset, opts) end
Performs the inverse of #where. Note that if you have multiple filter conditions, this is not the same as a negation of all conditions.
DB[:items].exclude(:category => 'software') # SELECT * FROM items WHERE (category != 'software') DB[:items].exclude(:category => 'software', :id=>3) # SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
Also note that SQL uses 3-valued boolean logic (true, false, NULL), so the inverse of a true condition is a false condition, and will still not match rows that were NULL originally. If you take the earlier example:
DB[:items].exclude(:category => 'software') # SELECT * FROM items WHERE (category != 'software')
Note that this does not match rows where category is NULL. This is because NULL is an unknown value, and you do not know whether or not the NULL category is 'software'. You can explicitly specify how to handle NULL values if you want:
DB[:items].exclude(Sequel.~(:category=>nil) & {:category => 'software'}) # SELECT * FROM items WHERE ((category IS NULL) OR (category != 'software'))
# File lib/sequel/dataset/query.rb, line 176 def exclude(*cond, &block) add_filter(:where, cond, true, &block) end
Inverts the given conditions and adds them to the HAVING clause.
DB[:items].select_group(:name).exclude_having{count(name) < 2} # SELECT name FROM items GROUP BY name HAVING (count(name) >= 2)
See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.
# File lib/sequel/dataset/query.rb, line 187 def exclude_having(*cond, &block) add_filter(:having, cond, true, &block) end
Alias for exclude.
# File lib/sequel/dataset/query.rb, line 192 def exclude_where(*cond, &block) exclude(*cond, &block) end
Return a clone of the dataset loaded with the given dataset extensions. If no related extension file exists or the extension does not have specific support for Dataset objects, an Error will be raised.
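A hedged usage sketch (:my_dataset_methods is the hypothetical extension registered in the register_extension example above; any real dataset extension shipped with Sequel works the same way):

ds = DB[:items].extension(:my_dataset_methods)
# ds is a clone extended with the extension's module; the receiver is unchanged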
# File lib/sequel/dataset/query.rb, line 200 def extension(*a) c = _clone(:freeze=>false) c.send(:_extension!, a) c.freeze if frozen? # SEQUEL5: Remove if frozen? c end
Alias for where.
# File lib/sequel/dataset/query.rb, line 217 def filter(*cond, &block) where(*cond, &block) end
Returns a cloned dataset with a :update lock style.
DB[:table].for_update # SELECT * FROM table FOR UPDATE
# File lib/sequel/dataset/query.rb, line 224 def for_update cached_dataset(:_for_update_ds){lock_style(:update)} end
Returns a copy of the dataset with the source changed. If no source is given, removes all tables. If multiple sources are given, it is the same as using a CROSS JOIN (cartesian product) between all tables. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].from # SQL: SELECT *
DB[:items].from(:blah) # SQL: SELECT * FROM blah
DB[:items].from(:blah, :foo) # SQL: SELECT * FROM blah, foo
DB[:items].from{fun(arg)} # SQL: SELECT * FROM fun(arg)
# File lib/sequel/dataset/query.rb, line 237
def from(*source, &block)
  virtual_row_columns(source, block)
  table_alias_num = 0
  ctes = nil
  source.map! do |s|
    case s
    when Dataset
      if hoist_cte?(s)
        ctes ||= []
        ctes += s.opts[:with]
        s = s.clone(:with=>nil)
      end
      SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1))
    when Symbol
      sch, table, aliaz = split_symbol(s)
      if aliaz
        s = sch ? SQL::QualifiedIdentifier.new(sch, table) : SQL::Identifier.new(table)
        SQL::AliasedExpression.new(s, aliaz.to_sym)
      else
        s
      end
    else
      s
    end
  end
  o = {:from=>source.empty? ? nil : source.freeze}
  o[:with] = ((opts[:with] || EMPTY_ARRAY) + ctes).freeze if ctes
  o[:num_dataset_sources] = table_alias_num if table_alias_num > 0
  clone(o)
end
Returns a dataset selecting from the current dataset. Supplying the :alias option controls the alias of the result.
ds = DB[:items].order(:name).select(:id, :name)
# SELECT id, name FROM items ORDER BY name

ds.from_self
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1

ds.from_self(:alias=>:foo)
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo

ds.from_self(:alias=>:foo, :column_aliases=>[:c1, :c2])
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo(c1, c2)
# File lib/sequel/dataset/query.rb, line 282
def from_self(opts=OPTS)
  fs = {}
  non_sql = non_sql_options
  @opts.keys.each{|k| fs[k] = nil unless non_sql.include?(k)}
  c = clone(fs).from(opts[:alias] ? as(opts[:alias], opts[:column_aliases]) : self)
  if cols = _columns
    c.send(:columns=, cols)
  end
  c
end
Match any of the columns to any of the patterns. The terms can be strings (which use LIKE) or regular expressions (which are only supported on MySQL and PostgreSQL). Note that the total number of pattern matches will be Array(columns).length * Array(terms).length, which could cause performance issues.
Options (all are boolean):
:all_columns :: All columns must be matched to any of the given patterns.
:all_patterns :: All patterns must match at least one of the columns.
:case_insensitive :: Use a case insensitive pattern match (the default is case sensitive if the database supports it).
If both :all_columns and :all_patterns are true, all columns must match all patterns.
Examples:
dataset.grep(:a, '%test%')
# SELECT * FROM items WHERE (a LIKE '%test%' ESCAPE '\')

dataset.grep([:a, :b], %w'%test% foo')
# SELECT * FROM items WHERE ((a LIKE '%test%' ESCAPE '\') OR (a LIKE 'foo' ESCAPE '\')
#   OR (b LIKE '%test%' ESCAPE '\') OR (b LIKE 'foo' ESCAPE '\'))

dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true)
# SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (b LIKE '%foo%' ESCAPE '\'))
#   AND ((a LIKE '%bar%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

dataset.grep([:a, :b], %w'%foo% %bar%', :all_columns=>true)
# SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (a LIKE '%bar%' ESCAPE '\'))
#   AND ((b LIKE '%foo%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true, :all_columns=>true)
# SELECT * FROM a WHERE ((a LIKE '%foo%' ESCAPE '\') AND (b LIKE '%foo%' ESCAPE '\')
#   AND (a LIKE '%bar%' ESCAPE '\') AND (b LIKE '%bar%' ESCAPE '\'))
# File lib/sequel/dataset/query.rb, line 328
def grep(columns, patterns, opts=OPTS)
  if opts[:all_patterns]
    conds = Array(patterns).map do |pat|
      SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *Array(columns).map{|c| SQL::StringExpression.like(c, pat, opts)})
    end
    where(SQL::BooleanExpression.new(opts[:all_patterns] ? :AND : :OR, *conds))
  else
    conds = Array(columns).map do |c|
      SQL::BooleanExpression.new(:OR, *Array(patterns).map{|pat| SQL::StringExpression.like(c, pat, opts)})
    end
    where(SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *conds))
  end
end
Returns a copy of the dataset with the results grouped by the value of the given columns. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].group(:id) # SELECT * FROM items GROUP BY id
DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name
DB[:items].group{[a, sum(b)]} # SELECT * FROM items GROUP BY a, sum(b)
# File lib/sequel/dataset/query.rb, line 349 def group(*columns, &block) virtual_row_columns(columns, block) clone(:group => (columns.compact.empty? ? nil : columns.freeze)) end
Returns a dataset grouped by the given column with count by group. Column aliases may be supplied, and will be included in the select clause. If a block is given, it is treated as a virtual row block, similar to where.
Examples:
DB[:items].group_and_count(:name).all
# SELECT name, count(*) AS count FROM items GROUP BY name
# => [{:name=>'a', :count=>1}, ...]

DB[:items].group_and_count(:first_name, :last_name).all
# SELECT first_name, last_name, count(*) AS count FROM items GROUP BY first_name, last_name
# => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]

DB[:items].group_and_count(:first_name___name).all
# SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name
# => [{:name=>'a', :count=>1}, ...]

DB[:items].group_and_count{substr(first_name, 1, 1).as(initial)}.all
# SELECT substr(first_name, 1, 1) AS initial, count(*) AS count FROM items GROUP BY substr(first_name, 1, 1)
# => [{:initial=>'a', :count=>1}, ...]
# File lib/sequel/dataset/query.rb, line 380 def group_and_count(*columns, &block) select_group(*columns, &block).select_append(COUNT_OF_ALL_AS_COUNT) end
Returns a copy of the dataset with the given columns added to the list of existing columns to group on. If no existing columns are present this method simply sets the columns as the initial ones to group on.
DB[:items].group_append(:b) # SELECT * FROM items GROUP BY b DB[:items].group(:a).group_append(:b) # SELECT * FROM items GROUP BY a, b
# File lib/sequel/dataset/query.rb, line 390 def group_append(*columns, &block) columns = @opts[:group] + columns if @opts[:group] group(*columns, &block) end
Alias of group
# File lib/sequel/dataset/query.rb, line 355 def group_by(*columns, &block) group(*columns, &block) end
Adds the appropriate CUBE syntax to GROUP BY.
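A hedged example (requires database support; the SQL shown is roughly what PostgreSQL would receive):

DB[:items].group(:a, :b).group_cube
# SELECT * FROM items GROUP BY CUBE(a, b) -- approximate SQL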
# File lib/sequel/dataset/query.rb, line 396 def group_cube raise Error, "GROUP BY CUBE not supported on #{db.database_type}" unless supports_group_cube? clone(:group_options=>:cube) end
Adds the appropriate ROLLUP syntax to GROUP BY.
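A hedged example (requires database support; the SQL is approximate and varies by database, e.g. some databases use GROUP BY a, b WITH ROLLUP):

DB[:items].group(:a, :b).group_rollup
# SELECT * FROM items GROUP BY ROLLUP(a, b) -- approximate SQL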
# File lib/sequel/dataset/query.rb, line 402 def group_rollup raise Error, "GROUP BY ROLLUP not supported on #{db.database_type}" unless supports_group_rollup? clone(:group_options=>:rollup) end
Adds the appropriate GROUPING SETS syntax to GROUP BY.
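A hedged example (requires database support; the SQL is approximate):

DB[:items].group(:a, :b).grouping_sets
# SELECT * FROM items GROUP BY GROUPING SETS(a, b) -- approximate SQL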
# File lib/sequel/dataset/query.rb, line 408 def grouping_sets raise Error, "GROUP BY GROUPING SETS not supported on #{db.database_type}" unless supports_grouping_sets? clone(:group_options=>:"grouping sets") end
Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation if the operation is not supported. Options:

:alias :: Use the given value as the #from_self alias
:all :: Set to true to use INTERSECT ALL instead of INTERSECT, so duplicate rows can occur
:from_self :: Set to false to not wrap the returned dataset in a #from_self, use with care.
DB[:items].intersect(DB[:other_items])
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS t1

DB[:items].intersect(DB[:other_items], :all=>true, :from_self=>false)
# SELECT * FROM items INTERSECT ALL SELECT * FROM other_items

DB[:items].intersect(DB[:other_items], :alias=>:i)
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb, line 438 def intersect(dataset, opts=OPTS) raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except? raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all? compound_clone(:intersect, dataset, opts) end
Inverts the current WHERE and HAVING clauses. If there is neither a WHERE or HAVING clause, adds a WHERE clause that is always false.
DB[:items].where(:category => 'software').invert
# SELECT * FROM items WHERE (category != 'software')

DB[:items].where(:category => 'software', :id=>3).invert
# SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.
# File lib/sequel/dataset/query.rb, line 455 def invert cached_dataset(:_invert_ds) do having, where = @opts.values_at(:having, :where) if having.nil? && where.nil? where(false) else o = {} o[:having] = SQL::BooleanExpression.invert(having) if having o[:where] = SQL::BooleanExpression.invert(where) if where clone(o) end end end
Alias of inner_join
# File lib/sequel/dataset/query.rb, line 470 def join(*args, &block) inner_join(*args, &block) end
Returns a joined dataset. Not usually called directly, users should use the appropriate join method (e.g. join, left_join, natural_join, cross_join) which fills in the type argument.
Takes the following arguments:
type :: The type of join to do (e.g. :inner)
table :: table to join into the current dataset. Generally one of the following types:
         String, Symbol :: identifier used as table or view name
         Dataset :: a subselect is performed with an alias of tN for some value of N
         SQL::Function :: set returning function
         SQL::AliasedExpression :: already aliased expression. Uses given alias unless overridden by the :table_alias option.
expr :: conditions used when joining, depends on type:
        Hash, Array of pairs :: Assumes key (1st arg) is column of joined table (unless already qualified), and value (2nd arg) is column of the last joined or primary table (or the :implicit_qualifier option). To specify multiple conditions on a single joined table column, you must use an array. Uses a JOIN with an ON clause.
        Array :: If all members of the array are symbols, considers them as columns and uses a JOIN with a USING clause. Most databases will remove duplicate columns from the result set if this is used.
        nil :: If a block is not given, doesn't use ON or USING, so the JOIN should be a NATURAL or CROSS join. If a block is given, uses an ON clause based on the block, see below.
        everything else :: Treats the argument as a filter expression, so strings are considered literal, symbols specify boolean columns, and Sequel expressions can be used. Uses a JOIN with an ON clause.
options :: a hash of options, with the following keys supported:
           :table_alias :: Override the table alias used when joining. In general you shouldn't use this option, you should provide the appropriate SQL::AliasedExpression as the table argument.
           :implicit_qualifier :: The name to use for qualifying implicit conditions. By default, the last joined or primary table is used.
           :reset_implicit_qualifier :: Can set to false to ignore this join when future joins determine qualifier for implicit conditions.
           :qualify :: Can be set to false to not do any implicit qualification. Can be set to :deep to use the Qualifier AST Transformer, which will attempt to qualify subexpressions of the expression tree. Can be set to :symbol to only qualify symbols. Defaults to the value of default_join_table_qualification.
The block argument should only be given if a JOIN with an ON clause is used, in which case it yields the table alias/name for the table currently being joined, the table alias/name for the last joined (or first) table, and an array of previous SQL::JoinClause. Unlike where, this block is not treated as a virtual row block.
Examples:
DB[:a].join_table(:cross, :b)
# SELECT * FROM a CROSS JOIN b

DB[:a].join_table(:inner, DB[:b], :c=>:d)
# SELECT * FROM a INNER JOIN (SELECT * FROM b) AS t1 ON (t1.c = a.d)

DB[:a].join_table(:left, :b___c, [:d])
# SELECT * FROM a LEFT JOIN b AS c USING (d)

DB[:a].natural_join(:b).join_table(:inner, :c) do |ta, jta, js|
  (Sequel.qualify(ta, :d) > Sequel.qualify(jta, :e)) & {Sequel.qualify(ta, :f)=>DB.from(js.first.table).select(:g)}
end
# SELECT * FROM a NATURAL JOIN b INNER JOIN c
#   ON ((c.d > b.e) AND (c.f IN (SELECT g FROM b)))
# File lib/sequel/dataset/query.rb, line 533
def join_table(type, table, expr=nil, options=OPTS, &block)
  if hoist_cte?(table)
    s, ds = hoist_cte(table)
    return s.join_table(type, ds, expr, options, &block)
  end

  using_join = expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)}
  if using_join && !supports_join_using?
    h = {}
    expr.each{|e| h[e] = e}
    return join_table(type, table, h, options)
  end

  table_alias = options[:table_alias]

  if table.is_a?(SQL::AliasedExpression)
    table_expr = if table_alias
      SQL::AliasedExpression.new(table.expression, table_alias, table.columns)
    else
      table
    end
    table = table_expr.expression
    table_name = table_alias = table_expr.alias
  elsif table.is_a?(Dataset)
    if table_alias.nil?
      table_alias_num = (@opts[:num_dataset_sources] || 0) + 1
      table_alias = dataset_alias(table_alias_num)
    end
    table_name = table_alias
    table_expr = SQL::AliasedExpression.new(table, table_alias)
  else
    table, implicit_table_alias = split_alias(table)
    table_alias ||= implicit_table_alias
    table_name = table_alias || table
    table_expr = table_alias ? SQL::AliasedExpression.new(table, table_alias) : table
  end

  join = if expr.nil? and !block
    SQL::JoinClause.new(type, table_expr)
  elsif using_join
    raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block
    SQL::JoinUsingClause.new(expr, type, table_expr)
  else
    last_alias = options[:implicit_qualifier] || @opts[:last_joined_table] || first_source_alias
    qualify_type = options[:qualify]
    if Sequel.condition_specifier?(expr)
      expr = expr.collect do |k, v|
        qualify_type = default_join_table_qualification if qualify_type.nil?
        case qualify_type
        when false
          nil # Do no qualification
        when :deep
          k = Sequel::Qualifier.new(table_name).transform(k)
          v = Sequel::Qualifier.new(last_alias).transform(v)
        else
          k = qualified_column_name(k, table_name) if k.is_a?(Symbol)
          v = qualified_column_name(v, last_alias) if v.is_a?(Symbol)
        end
        [k,v]
      end
      expr = SQL::BooleanExpression.from_value_pairs(expr)
    end
    if block
      expr2 = yield(table_name, last_alias, @opts[:join] || EMPTY_ARRAY)
      expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2
    end
    SQL::JoinOnClause.new(expr, type, table_expr)
  end

  opts = {:join => ((@opts[:join] || EMPTY_ARRAY) + [join]).freeze}
  opts[:last_joined_table] = table_name unless options[:reset_implicit_qualifier] == false
  opts[:num_dataset_sources] = table_alias_num if table_alias_num
  clone(opts)
end
Marks this dataset as a lateral dataset. If used in another dataset’s FROM or JOIN clauses, it will surround the subquery with LATERAL to enable it to deal with previous tables in the query:
DB.from(:a, DB[:b].where(:a__c=>:b__d).lateral) # SELECT * FROM a, LATERAL (SELECT * FROM b WHERE (a.c = b.d))
# File lib/sequel/dataset/query.rb, line 627 def lateral clone(:lateral=>true) end
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset. To use an offset without a limit, pass nil as the first argument.
DB[:items].limit(10) # SELECT * FROM items LIMIT 10
DB[:items].limit(10, 20) # SELECT * FROM items LIMIT 10 OFFSET 20
DB[:items].limit(10...20) # SELECT * FROM items LIMIT 10 OFFSET 10
DB[:items].limit(10..20) # SELECT * FROM items LIMIT 11 OFFSET 10
DB[:items].limit(nil, 20) # SELECT * FROM items OFFSET 20
# File lib/sequel/dataset/query.rb, line 641
def limit(l, o = (no_offset = true; nil))
  return from_self.limit(l, o) if @opts[:sql]

  if l.is_a?(Range)
    no_offset = false
    o = l.first
    l = l.last - l.first + (l.exclude_end? ? 0 : 1)
  end
  l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString)
  if l.is_a?(Integer)
    raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1
  end
  ds = clone(:limit=>l)
  ds = ds.offset(o) unless no_offset
  ds
end
Returns a cloned dataset with the given lock style. If style is a string, it will be used directly. You should never pass a string to this method that is derived from user input, as that can lead to SQL injection.
A symbol may be used for database independent locking behavior, but all supported symbols have separate methods (e.g. #for_update).
DB[:items].lock_style('FOR SHARE NOWAIT') # SELECT * FROM items FOR SHARE NOWAIT DB[:items].lock_style('FOR UPDATE OF table1 SKIP LOCKED') # SELECT * FROM items FOR UPDATE OF table1 SKIP LOCKED
# File lib/sequel/dataset/query.rb, line 671 def lock_style(style) clone(:lock => style) end
Returns a cloned dataset without a row_proc.
ds = DB[:items]
ds.row_proc = proc(&:invert)
ds.all # => [{2=>:id}]
ds.naked.all # => [{:id=>2}]
# File lib/sequel/dataset/query.rb, line 681 def naked cached_dataset(:_naked_ds){with_row_proc(nil)} end
Returns a copy of the dataset with the given offset. Can be safely combined with limit. If you call limit with an offset, it will override the offset if you've called offset first.
DB[:items].offset(10) # SELECT * FROM items OFFSET 10
# File lib/sequel/dataset/query.rb, line 690 def offset(o) o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString) if o.is_a?(Integer) raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0 end clone(:offset => o) end
Adds an alternate filter to an existing WHERE clause using OR. If there is no WHERE clause, then the default is WHERE true, and OR would be redundant, so return an unmodified clone of the dataset in that case.
DB[:items].where(:a).or(:b) # SELECT * FROM items WHERE a OR b DB[:items].or(:b) # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 704 def or(*cond, &block) if @opts[:where].nil? clone else add_filter(:where, cond, false, :OR, &block) end end
Returns a copy of the dataset with the order changed. If the dataset has an existing order, it is ignored and overwritten with this order. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, such as SQL functions. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].order(:name) # SELECT * FROM items ORDER BY name
DB[:items].order(:a, :b) # SELECT * FROM items ORDER BY a, b
DB[:items].order(Sequel.lit('a + b')) # SELECT * FROM items ORDER BY a + b
DB[:items].order(:a + :b) # SELECT * FROM items ORDER BY (a + b)
DB[:items].order(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name DESC
DB[:items].order(Sequel.asc(:name, :nulls=>:last)) # SELECT * FROM items ORDER BY name ASC NULLS LAST
DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC
DB[:items].order(nil) # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 726 def order(*columns, &block) virtual_row_columns(columns, block) clone(:order => (columns.compact.empty?) ? nil : columns.freeze) end
Returns a copy of the dataset with the order columns added to the end of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b DB[:items].order(:a).order_append(:b) # SELECT * FROM items ORDER BY a, b
# File lib/sequel/dataset/query.rb, line 736 def order_append(*columns, &block) columns = @opts[:order] + columns if @opts[:order] order(*columns, &block) end
Alias of order
# File lib/sequel/dataset/query.rb, line 742 def order_by(*columns, &block) order(*columns, &block) end
Alias of order_append.
# File lib/sequel/dataset/query.rb, line 747 def order_more(*columns, &block) order_append(*columns, &block) end
Returns a copy of the dataset with the order columns added to the beginning of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b DB[:items].order(:a).order_prepend(:b) # SELECT * FROM items ORDER BY b, a
# File lib/sequel/dataset/query.rb, line 756 def order_prepend(*columns, &block) ds = order(*columns, &block) @opts[:order] ? ds.order_append(*@opts[:order]) : ds end
Qualify to the given table, or first source if no table is given.
DB[:items].where(:id=>1).qualify # SELECT items.* FROM items WHERE (items.id = 1) DB[:items].where(:id=>1).qualify(:i) # SELECT i.* FROM items WHERE (i.id = 1)
# File lib/sequel/dataset/query.rb, line 768
def qualify(table=first_source)
  o = @opts
  return clone if o[:sql] # SEQUEL5: return self
  h = {}
  (o.keys & QUALIFY_KEYS).each do |k|
    h[k] = qualified_expression(o[k], table)
  end
  h[:select] = [SQL::ColumnAll.new(table)].freeze if !o[:select] || o[:select].empty?
  clone(h)
end
Modify the RETURNING clause, only supported on a few databases. If returning is used, instead of insert returning the autogenerated primary key or update/delete returning the number of modified rows, results are returned using fetch_rows.
DB[:items].returning # RETURNING *
DB[:items].returning(nil) # RETURNING NULL
DB[:items].returning(:id, :name) # RETURNING id, name
# File lib/sequel/dataset/query.rb, line 787 def returning(*values) raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert) clone(:returning=>values.freeze) end
Returns a copy of the dataset with the order reversed. If no order is given, the existing order is inverted.
DB[:items].reverse(:id) # SELECT * FROM items ORDER BY id DESC
DB[:items].reverse{foo(bar)} # SELECT * FROM items ORDER BY foo(bar) DESC
DB[:items].order(:id).reverse # SELECT * FROM items ORDER BY id DESC
DB[:items].order(:id).reverse(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name ASC
# File lib/sequel/dataset/query.rb, line 799 def reverse(*order, &block) if order.empty? && !block cached_dataset(:_reverse_ds){order(*invert_order(@opts[:order]))} else virtual_row_columns(order, block) order(*invert_order(order.empty? ? @opts[:order] : order.freeze)) end end
Alias of reverse
# File lib/sequel/dataset/query.rb, line 809 def reverse_order(*order, &block) reverse(*order, &block) end
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to where.
DB[:items].select(:a) # SELECT a FROM items
DB[:items].select(:a, :b) # SELECT a, b FROM items
DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items
# File lib/sequel/dataset/query.rb, line 820 def select(*columns, &block) virtual_row_columns(columns, block) clone(:select => columns.freeze) end
Returns a copy of the dataset selecting the wildcard if no arguments are given. If arguments are given, treat them as tables and select all columns (using the wildcard) from each table.
DB[:items].select(:a).select_all # SELECT * FROM items
DB[:items].select_all(:items) # SELECT items.* FROM items
DB[:items].select_all(:items, :foo) # SELECT items.*, foo.* FROM items
# File lib/sequel/dataset/query.rb, line 832 def select_all(*tables) if tables.empty? clone(:select => nil) else select(*tables.map{|t| i, a = split_alias(t); a || i}.map!{|t| SQL::ColumnAll.new(t)}.freeze) end end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_append(:b) # SELECT a, b FROM items
DB[:items].select_append(:b) # SELECT *, b FROM items
# File lib/sequel/dataset/query.rb, line 847 def select_append(*columns, &block) cur_sel = @opts[:select] if !cur_sel || cur_sel.empty? unless supports_select_all_and_column? return select_all(*(Array(@opts[:from]) + Array(@opts[:join]))).select_append(*columns, &block) end cur_sel = [WILDCARD] end select(*(cur_sel + columns), &block) end
Set both the select and group clauses with the given columns. Column aliases may be supplied, and will be included in the select clause. This also takes a virtual row block similar to where.
DB[:items].select_group(:a, :b)
# SELECT a, b FROM items GROUP BY a, b

DB[:items].select_group(:c___a){f(c2)}
# SELECT c AS a, f(c2) FROM items GROUP BY c, f(c2)
# File lib/sequel/dataset/query.rb, line 867 def select_group(*columns, &block) virtual_row_columns(columns, block) select(*columns).group(*columns.map{|c| unaliased_identifier(c)}) end
Alias for select_append.
# File lib/sequel/dataset/query.rb, line 873 def select_more(*columns, &block) select_append(*columns, &block) end
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (where SELECT uses :read_only database and all other queries use the :default database). This method is always available but is only useful when database sharding is being used.
DB[:items].all # Uses the :read_only or :default server
DB[:items].delete # Uses the :default server
DB[:items].server(:blah).delete # Uses the :blah server
# File lib/sequel/dataset/query.rb, line 886 def server(servr) clone(:server=>servr) end
If the database uses sharding and the current dataset has not had a server set, return a cloned dataset that uses the given server. Otherwise, return the receiver directly instead of returning a clone.
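A hedged usage sketch (:read_only is just an example shard name):

ds = DB[:items].server?(:read_only)
# If DB is sharded and no server was set on the dataset, ds uses :read_only;
# otherwise ds is the receiver itself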
# File lib/sequel/dataset/query.rb, line 893 def server?(server) if db.sharded? && !opts[:server] server(server) else self end end
Specify that the check for limits/offsets when updating/deleting be skipped for the dataset.
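A hedged usage sketch (whether the limit affects the generated DELETE depends on the database/adapter):

DB[:items].limit(10).skip_limit_check.delete
# Skips the usual limit/offset check when deleting from a limited dataset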
# File lib/sequel/dataset/query.rb, line 902 def skip_limit_check cached_dataset(:_skip_limit_check_ds) do clone(:skip_limit_check=>true) end end
Skip locked rows when returning results from this dataset.
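A hedged example (requires database support, e.g. PostgreSQL 9.5+; the SQL shown is approximate):

DB[:items].for_update.skip_locked.all
# SELECT * FROM items FOR UPDATE SKIP LOCKED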
# File lib/sequel/dataset/query.rb, line 909 def skip_locked cached_dataset(:_skip_locked_ds) do raise(Error, 'This dataset does not support skipping locked rows') unless supports_skip_locked? clone(:skip_locked=>true) end end
Unbind bound variables from this dataset’s filter and return an array of two objects. The first object is a modified dataset where the filter has been replaced with one that uses bound variable placeholders. The second object is the hash of unbound variables. You can then prepare and execute (or just call) the dataset with the bound variables to get results.
ds, bv = DB[:items].where(:a=>1).unbind
ds # SELECT * FROM items WHERE (a = $a)
bv # {:a => 1}
ds.call(:select, bv)
# File lib/sequel/dataset/query.rb, line 926 def unbind Sequel::Deprecation.deprecate("Dataset#unbind", "Switch to using placeholders manually instead of unbinding them") u = Unbinder.new ds = clone(:where=>u.transform(opts[:where]), :join=>u.transform(opts[:join])) [ds, u.binds] end
Returns a copy of the dataset with no filters (HAVING or WHERE clause) applied.
DB[:items].group(:a).having(:a=>1).where(:b).unfiltered # SELECT * FROM items GROUP BY a
# File lib/sequel/dataset/query.rb, line 937 def unfiltered cached_dataset(:_unfiltered_ds){clone(:where => nil, :having => nil)} end
Returns a copy of the dataset with no grouping (GROUP or HAVING clause) applied.
DB[:items].group(:a).having(:a=>1).where(:b).ungrouped # SELECT * FROM items WHERE b
# File lib/sequel/dataset/query.rb, line 945 def ungrouped cached_dataset(:_ungrouped_ds){clone(:group => nil, :having => nil)} end
Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options:
:alias :: Use the given value as the #from_self alias
:all :: Set to true to use UNION ALL instead of UNION, so duplicate rows can occur
:from_self :: Set to false to not wrap the returned dataset in a #from_self, use with care.
DB[:items].union(DB[:other_items])
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS t1

DB[:items].union(DB[:other_items], :all=>true, :from_self=>false)
# SELECT * FROM items UNION ALL SELECT * FROM other_items

DB[:items].union(DB[:other_items], :alias=>:i)
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb, line 965 def union(dataset, opts=OPTS) compound_clone(:union, dataset, opts) end
Returns a copy of the dataset with no limit or offset.
DB[:items].limit(10, 20).unlimited # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 972 def unlimited cached_dataset(:_unlimited_ds){clone(:limit=>nil, :offset=>nil)} end
Returns a copy of the dataset with no order.
DB[:items].order(:a).unordered # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 979 def unordered cached_dataset(:_unordered_ds){clone(:order=>nil)} end
Returns a copy of the dataset with the given WHERE conditions imposed upon it.
Accepts the following argument types:
Hash :: list of equality/inclusion expressions
Array :: depends:
         * If first member is a string, assumes the rest of the arguments are parameters and interpolates them into the string.
         * If all members are arrays of length two, treats the same way as a hash, except it allows for duplicate keys to be specified.
         * Otherwise, treats each argument as a separate condition.
String :: taken literally
Symbol :: taken as a boolean column argument (e.g. WHERE active)
Sequel::SQL::BooleanExpression :: an existing condition expression, probably created using the Sequel expression filter DSL.
where also accepts a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions. For more details on the virtual row support, see the “Virtual Rows” guide
If both a block and regular argument are provided, they get ANDed together.
Examples:
DB[:items].where(:id => 3) # SELECT * FROM items WHERE (id = 3)
DB[:items].where('price < ?', 100) # SELECT * FROM items WHERE price < 100
DB[:items].where([[:id, [1,2,3]], [:id, 0..10]]) # SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))
DB[:items].where('price < 100') # SELECT * FROM items WHERE price < 100
DB[:items].where(:active) # SELECT * FROM items WHERE active
DB[:items].where{price < 100} # SELECT * FROM items WHERE (price < 100)
Multiple where calls can be chained for scoping:
software = dataset.where(:category => 'software').where{price < 100} # SELECT * FROM items WHERE ((category = 'software') AND (price < 100))
See the “Dataset Filtering” guide for more examples and details.
# File lib/sequel/dataset/query.rb, line 1033 def where(*cond, &block) add_filter(:where, cond, &block) end
Add a common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query. Options:
:args :: Specify the arguments/columns for the CTE, should be an array of symbols.
:recursive :: Specify that this is a recursive CTE
DB[:items].with(:items, DB[:syx].where(:name.like('A%'))) # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%' ESCAPE '\')) SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 1045
def with(name, dataset, opts=OPTS)
  raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
  if hoist_cte?(dataset)
    s, ds = hoist_cte(dataset)
    s.with(name, ds, opts)
  else
    clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:name=>name, :dataset=>dataset)]).freeze)
  end
end
Return a clone of the dataset extended with the given modules. Note that like Object#extend, when multiple modules are provided as arguments the cloned dataset is extended with the modules in reverse order. If a block is provided, a module is created using the block and the clone is extended with that module after any modules given as arguments.
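A hedged usage sketch (the helper method defined here is hypothetical):

ds = DB[:items].with_extend do
  # Added only to this clone, not to DB[:items] itself
  def active
    where(:active=>true)
  end
end
ds.active.all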
# File lib/sequel/dataset/query.rb, line 1090 def with_extend(*mods, &block) c = _clone(:freeze=>false) c.extend(*mods) unless mods.empty? c.extend(Module.new(&block)) if block c.freeze if frozen? # SEQUEL5: Remove if frozen? c end
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE. Options:
:args :: Specify the arguments/columns for the CTE, should be an array of symbols.
:union_all :: Set to false to use UNION instead of UNION ALL combining the nonrecursive and recursive parts.
DB[:t].with_recursive(:t,
  DB[:i1].select(:id, :parent_id).where(:parent_id=>nil),
  DB[:i1].join(:t, :id=>:parent_id).select(:i1__id, :i1__parent_id),
  :args=>[:id, :parent_id])
# WITH RECURSIVE "t"("id", "parent_id") AS (
#   SELECT "id", "parent_id" FROM "i1" WHERE ("parent_id" IS NULL)
#   UNION ALL
#   SELECT "i1"."id", "i1"."parent_id" FROM "i1" INNER JOIN "t" ON ("t"."id" = "i1"."parent_id")
# ) SELECT * FROM "t"
# File lib/sequel/dataset/query.rb, line 1071
def with_recursive(name, nonrecursive, recursive, opts=OPTS)
  raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
  if hoist_cte?(nonrecursive)
    s, ds = hoist_cte(nonrecursive)
    s.with_recursive(name, ds, recursive, opts)
  elsif hoist_cte?(recursive)
    s, ds = hoist_cte(recursive)
    s.with_recursive(name, nonrecursive, ds, opts)
  else
    clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))]).freeze)
  end
end
Returns a cloned dataset with the given row_proc.
ds = DB[:items]
ds.all # => [{:id=>2}]
ds.with_row_proc(proc(&:invert)).all # => [{2=>:id}]
# File lib/sequel/dataset/query.rb, line 1113 def with_row_proc(callable) clone(:row_proc=>callable) end
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
DB[:items].with_sql('SELECT * FROM foo') # SELECT * FROM foo
You can use placeholders in your SQL and provide arguments for those placeholders:
DB[:items].with_sql('SELECT ? FROM foo', 1) # SELECT 1 FROM foo
You can also provide a method name and arguments to call to get the SQL:
DB[:items].with_sql(:insert_sql, :b=>1) # INSERT INTO items (b) VALUES (1)
Note that datasets that specify custom SQL using this method will generally ignore future dataset methods that modify the SQL used, as specifying custom SQL overrides Sequel’s SQL generator. You should probably limit yourself to the following dataset methods when using this method:
each
all
#single_record (if only one record could be returned)
#single_value (if only one record could be returned, and a single column is selected)
map
delete (if a DELETE statement)
update (if an UPDATE statement, with no arguments)
insert (if an INSERT statement, with no arguments)
truncate (if a TRUNCATE statement, with no arguments)
If you want to use arbitrary dataset methods on a dataset that uses custom SQL, call #from_self on the dataset, which wraps the custom SQL in a subquery, and allows normal dataset methods that modify the SQL to work.
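For example (a sketch):

DB[:items].with_sql('SELECT * FROM foo').from_self.where(:a=>1)
# SELECT * FROM (SELECT * FROM foo) AS t1 WHERE (a = 1)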
# File lib/sequel/dataset/query.rb, line 1149
def with_sql(sql, *args)
  if sql.is_a?(Symbol)
    sql = send(sql, *args)
  else
    sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty?
  end
  clone(:sql=>sql)
end
Add the dataset to the list of compounds
# File lib/sequel/dataset/query.rb, line 1161 def compound_clone(type, dataset, opts) if hoist_cte?(dataset) s, ds = hoist_cte(dataset) return s.compound_clone(type, ds, opts) end ds = compound_from_self.clone(:compounds=>(Array(@opts[:compounds]).map(&:dup) + [[type, dataset.compound_from_self, opts[:all]].freeze]).freeze) opts[:from_self] == false ? ds : ds.from_self(opts) end
Return true if the dataset has a non-nil value for any key in opts.
# File lib/sequel/dataset/query.rb, line 1171 def options_overlap(opts) !(@opts.collect{|k,v| k unless v.nil?}.compact & opts).empty? end
Whether this dataset is a simple select from an underlying table, such as:
SELECT * FROM table
SELECT table.* FROM table
# File lib/sequel/dataset/query.rb, line 1182
def simple_select_all?
  return false unless (f = @opts[:from]) && f.length == 1
  non_sql = non_sql_options
  o = @opts.reject{|k,v| v.nil? || non_sql.include?(k)}
  from = f.first
  from = from.expression if from.is_a?(SQL::AliasedExpression)
  if SIMPLE_SELECT_ALL_ALLOWED_FROM.any?{|x| from.is_a?(x)}
    case o.length
    when 1
      true
    when 2
      (s = o[:select]) && s.length == 1 && s.first.is_a?(SQL::ColumnAll)
    else
      false
    end
  else
    false
  end
end
These methods all execute the dataset’s SQL on the database. They don’t return modified datasets, so if used in a method chain they should be the last method called.
Action methods defined by Sequel that execute code on the database.
The clone options to use when retrieving columns for a dataset.
Inserts the given argument into the database. Returns self so it can be used safely when chaining:
DB[:items] << {:id=>0, :name=>'Zero'} << DB[:old_items].select(:id, :name)
# File lib/sequel/dataset/actions.rb, line 27 def <<(arg) insert(arg) self end
Returns the first record matching the conditions. Examples:
DB[:table][:id=>1] # SELECT * FROM table WHERE (id = 1) LIMIT 1
# => {:id=>1}
# File lib/sequel/dataset/actions.rb, line 36 def [](*conditions) raise(Error, 'You cannot call Dataset#[] with an integer or with no arguments') if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0 first(*conditions) end
Returns an array with all records in the dataset. If a block is given, the array is iterated over after all items have been loaded.
DB[:table].all # SELECT * FROM table
# => [{:id=>1, ...}, {:id=>2, ...}, ...]

# Iterate over all rows in the table
DB[:table].all{|row| p row}
# File lib/sequel/dataset/actions.rb, line 49 def all(&block) _all(block){|a| each{|r| a << r}} end
Returns the average value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].avg(:number) # SELECT avg(number) FROM table LIMIT 1
# => 3
DB[:table].avg{function(column)} # SELECT avg(function(column)) FROM table LIMIT 1
# => 1
# File lib/sequel/dataset/actions.rb, line 60 def avg(arg=Sequel.virtual_row(&Proc.new)) _aggregate(:avg, arg) end
Returns the columns in the result set in order as an array of symbols. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to retrieve a single row in order to get the columns.
If you are looking for all columns for a single table and maybe some information about each column (e.g. database type), see Database#schema.
DB[:table].columns # => [:id, :name]
# File lib/sequel/dataset/actions.rb, line 73 def columns _columns || columns! end
Ignore any cached column information and perform a query to retrieve a row in order to get the columns.
DB[:table].columns! # => [:id, :name]
# File lib/sequel/dataset/actions.rb, line 82
def columns!
  ds = clone(COLUMNS_CLONE_OPTIONS)
  ds.each{break}
  if cols = ds.cache[:_columns]
    self.columns = cols
  else
    []
  end
end
Returns the number of records in the dataset. If an argument is provided, it is used as the argument to count. If a block is provided, it is treated as a virtual row, and the result is used as the argument to count.
DB[:table].count # SELECT count(*) AS count FROM table LIMIT 1
# => 3
DB[:table].count(:column) # SELECT count(column) AS count FROM table LIMIT 1
# => 2
DB[:table].count{foo(column)} # SELECT count(foo(column)) AS count FROM table LIMIT 1
# => 1
# File lib/sequel/dataset/actions.rb, line 104
def count(arg=(no_arg=true), &block)
  if no_arg && !block
    cached_dataset(:_count_ds) do
      aggregate_dataset.select(Sequel.function(:count).*.as(:count)).single_value_ds
      #aggregate_dataset.select(COUNT_SELECT).single_value_ds # SEQUEL5
    end.single_value!.to_i
  else
    if block
      if no_arg
        arg = Sequel.virtual_row(&block)
      else
        raise Error, 'cannot provide both argument and block to Dataset#count'
      end
    end
    _aggregate(:count, arg)
  end
end
Deletes the records in the dataset. The returned value should be number of records deleted, but that is adapter dependent.
DB[:table].delete # DELETE * FROM table # => 3
# File lib/sequel/dataset/actions.rb, line 131 def delete(&block) sql = delete_sql if uses_returning?(:delete) returning_fetch_rows(sql, &block) else execute_dui(sql) end end
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
DB[:table].each{|row| p row} # SELECT * FROM table
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each for the outer queries, or use a separate thread or shard inside each.
# File lib/sequel/dataset/actions.rb, line 149
def each
  if rp = row_proc
    fetch_rows(select_sql){|r| yield rp.call(r)}
  else
    fetch_rows(select_sql){|r| yield r}
  end
  self
end
Returns true if no records exist in the dataset, false otherwise
DB[:table].empty? # SELECT 1 AS one FROM table LIMIT 1 # => false
# File lib/sequel/dataset/actions.rb, line 162 def empty? cached_dataset(:_empty_ds) do single_value_ds.unordered.select(Sequel::SQL::AliasedExpression.new(1, :one)) # single_value_ds.unordered.select(EMPTY_SELECT) # SEQUEL5 end.single_value!.nil? end
If an integer argument is given, it is interpreted as a limit, and then returns all matching records up to that limit. If no argument is passed, it returns the first matching record. If any other type of argument(s) is passed, it is given to filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything.
If there are no records in the dataset, returns nil (or an empty array if an integer argument is given).
Examples:
DB[:table].first # SELECT * FROM table LIMIT 1
# => {:id=>7}

DB[:table].first(2) # SELECT * FROM table LIMIT 2
# => [{:id=>6}, {:id=>4}]

DB[:table].first(:id=>2) # SELECT * FROM table WHERE (id = 2) LIMIT 1
# => {:id=>2}

DB[:table].first("id = 3") # SELECT * FROM table WHERE (id = 3) LIMIT 1
# => {:id=>3}

DB[:table].first("id = ?", 4) # SELECT * FROM table WHERE (id = 4) LIMIT 1
# => {:id=>4}

DB[:table].first{id > 2} # SELECT * FROM table WHERE (id > 2) LIMIT 1
# => {:id=>5}

DB[:table].first("id > ?", 4){id < 6} # SELECT * FROM table WHERE ((id > 4) AND (id < 6)) LIMIT 1
# => {:id=>5}

DB[:table].first(2){id < 2} # SELECT * FROM table WHERE (id < 2) LIMIT 2
# => [{:id=>1}]
# File lib/sequel/dataset/actions.rb, line 207
def first(*args, &block)
  case args.length
  when 0
    unless block
      return single_record
    end
  when 1
    arg = args[0]
    if arg.is_a?(Integer)
      res = if block
        if loader = cached_placeholder_literalizer(:_first_integer_cond_loader) do |pl|
              where(pl.arg).limit(pl.arg)
            end
          loader.all(filter_expr(&block), arg)
        else
          where(&block).limit(arg).all
        end
      else
        if loader = cached_placeholder_literalizer(:_first_integer_loader) do |pl|
              limit(pl.arg)
            end
          loader.all(arg)
        else
          limit(arg).all
        end
      end
      return res
    end
    args = arg
  end

  if loader = cached_placeholder_literalizer(:_first_cond_loader) do |pl|
        _single_record_ds.where(pl.arg)
      end
    loader.first(filter_expr(args, &block))
  else
    _single_record_ds.where(args, &block).single_record!
  end
end
Calls first. If first returns nil (signaling that no row matches), raise a Sequel::NoMatchingRow exception.
# File lib/sequel/dataset/actions.rb, line 253 def first!(*args, &block) first(*args, &block) || raise(Sequel::NoMatchingRow.new(self)) end
Return the column value for the first matching record in the dataset. Raises an error if both an argument and block is given.
DB[:table].get(:id) # SELECT id FROM table LIMIT 1
# => 3
ds.get{sum(id)} # SELECT sum(id) AS v FROM table LIMIT 1
# => 6
You can pass an array of arguments to return multiple arguments, but you must make sure each element in the array has an alias that Sequel can determine:
DB[:table].get([:id, :name]) # SELECT id, name FROM table LIMIT 1
# => [3, 'foo']

DB[:table].get{[sum(id).as(sum), name]} # SELECT sum(id) AS sum, name FROM table LIMIT 1
# => [6, 'foo']
# File lib/sequel/dataset/actions.rb, line 275
def get(column=(no_arg=true; nil), &block)
  ds = naked
  if block
    raise(Error, 'Must call Dataset#get with an argument or a block, not both') unless no_arg
    ds = ds.select(&block)
    column = ds.opts[:select]
    column = nil if column.is_a?(Array) && column.length < 2
  else
    case column
    when Array
      ds = ds.select(*column)
    when LiteralString, Symbol, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression
      if loader = cached_placeholder_literalizer(:_get_loader) do |pl|
            ds.single_value_ds.select(pl.arg)
          end
        return loader.get(column)
      end
      ds = ds.select(column)
    else
      if loader = cached_placeholder_literalizer(:_get_alias_loader) do |pl|
            ds.single_value_ds.select(Sequel.as(pl.arg, :v))
          end
        return loader.get(column)
      end
      ds = ds.select(Sequel.as(column, :v))
    end
  end

  if column.is_a?(Array)
    if r = ds.single_record
      r.values_at(*hash_key_symbols(column))
    end
  else
    ds.single_value
  end
end
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table in a single query if the database supports it. Inserts are automatically wrapped in a transaction.
This method is called with a columns array and an array of value arrays:
DB[:table].import([:x, :y], [[1, 2], [3, 4]])
# INSERT INTO table (x, y) VALUES (1, 2)
# INSERT INTO table (x, y) VALUES (3, 4)
This method also accepts a dataset instead of an array of value arrays:
DB[:table].import([:x, :y], DB[:table2].select(:a, :b)) # INSERT INTO table (x, y) SELECT a, b FROM table2
Options:
:commit_every :: Open a new transaction for every given number of records. For example, if you provide a value of 50, will commit after every 50 records.
:return :: When this is set to :primary_key, returns an array of autoincremented primary key values for the rows inserted.
:server :: Set the server/shard to use for the transaction and insert queries.
:slice :: Same as :commit_every, :commit_every takes precedence.
# File lib/sequel/dataset/actions.rb, line 341
def import(columns, values, opts=OPTS)
  return @db.transaction{insert(columns, values)} if values.is_a?(Dataset)

  return if values.empty?
  raise(Error, 'Using Sequel::Dataset#import with an empty column array is not allowed') if columns.empty?
  ds = opts[:server] ? server(opts[:server]) : self

  if slice_size = opts.fetch(:commit_every, opts.fetch(:slice, default_import_slice))
    offset = 0
    rows = []
    while offset < values.length
      rows << ds._import(columns, values[offset, slice_size], opts)
      offset += slice_size
    end
    rows.flatten
  else
    ds._import(columns, values, opts)
  end
end
Inserts values into the associated table. The returned value is generally the value of the primary key for the inserted row, but that is adapter dependent.
insert handles a number of different argument formats:
single hash :: Most common format, treats keys as columns and values as values
single array :: Treats entries as values, with no columns
two arrays :: Treats first array as columns, second array as values
single Dataset :: Treats as an insert based on a selection from the dataset given, with no columns
array and dataset :: Treats as an insert based on a selection from the dataset given, with the columns given by the array.
Examples:
DB[:items].insert # INSERT INTO items DEFAULT VALUES
DB[:items].insert({}) # INSERT INTO items DEFAULT VALUES
DB[:items].insert([1,2,3]) # INSERT INTO items VALUES (1, 2, 3)
DB[:items].insert([:a, :b], [1,2]) # INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(:a => 1, :b => 2) # INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(DB[:old_items]) # INSERT INTO items SELECT * FROM old_items
DB[:items].insert([:a, :b], DB[:old_items]) # INSERT INTO items (a, b) SELECT * FROM old_items
# File lib/sequel/dataset/actions.rb, line 396 def insert(*values, &block) sql = insert_sql(*values) if uses_returning?(:insert) returning_fetch_rows(sql, &block) else execute_insert(sql) end end
Returns the interval between minimum and maximum values for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].interval(:id) # SELECT (max(id) - min(id)) FROM table LIMIT 1 # => 6 DB[:table].interval{function(column)} # SELECT (max(function(column)) - min(function(column))) FROM table LIMIT 1 # => 7
# File lib/sequel/dataset/actions.rb, line 412 def interval(column=Sequel.virtual_row(&Proc.new)) if loader = cached_placeholder_literalizer(:_interval_loader) do |pl| arg = pl.arg aggregate_dataset.limit(1).select((SQL::Function.new(:max, arg) - SQL::Function.new(:min, arg)).as(:interval)) end loader.get(column) else aggregate_dataset.get{(max(column) - min(column)).as(:interval)} end end
Reverses the order and then runs first with the given arguments and block. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
DB[:table].order(:id).last # SELECT * FROM table ORDER BY id DESC LIMIT 1
# => {:id=>10}

DB[:table].order(Sequel.desc(:id)).last(2) # SELECT * FROM table ORDER BY id ASC LIMIT 2
# => [{:id=>1}, {:id=>2}]
# File lib/sequel/dataset/actions.rb, line 434 def last(*args, &block) raise(Error, 'No order specified') unless @opts[:order] reverse.first(*args, &block) end
Maps column values for each record in the dataset (if a column name is given), or performs the stock mapping functionality of Enumerable otherwise. Raises an Error if both an argument and block are given.
DB[:table].map(:id) # SELECT * FROM table
# => [1, 2, 3, ...]

DB[:table].map{|r| r[:id] * 2} # SELECT * FROM table
# => [2, 4, 6, ...]
You can also provide an array of column names:
DB[:table].map([:id, :name]) # SELECT * FROM table # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
# File lib/sequel/dataset/actions.rb, line 453 def map(column=nil, &block) if column raise(Error, 'Must call Dataset#map with either an argument or a block, not both') if block return naked.map(column) if row_proc if column.is_a?(Array) super(){|r| r.values_at(*column)} else super(){|r| r[column]} end else super(&block) end end
Returns the maximum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].max(:id) # SELECT max(id) FROM table LIMIT 1 # => 10 DB[:table].max{function(column)} # SELECT max(function(column)) FROM table LIMIT 1 # => 7
# File lib/sequel/dataset/actions.rb, line 474 def max(arg=Sequel.virtual_row(&Proc.new)) _aggregate(:max, arg) end
Returns the minimum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].min(:id) # SELECT min(id) FROM table LIMIT 1 # => 1 DB[:table].min{function(column)} # SELECT min(function(column)) FROM table LIMIT 1 # => 0
# File lib/sequel/dataset/actions.rb, line 485 def min(arg=Sequel.virtual_row(&Proc.new)) _aggregate(:min, arg) end
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
DB[:table].multi_insert([{:x => 1}, {:x => 2}]) # INSERT INTO table (x) VALUES (1) # INSERT INTO table (x) VALUES (2)
Be aware that all hashes should have the same keys if you use this calling method, otherwise some columns could be missed or set to null instead of to default values.
This respects the same options as import.
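Because the options are shared with import, batching and primary key options pass straight through; a hedged sketch (the :return behavior is adapter dependent):
ids = DB[:table].multi_insert([{:x=>1}, {:x=>2}, {:x=>3}], :slice=>2, :return=>:primary_key)
# Uses one INSERT per row inside a transaction and collects the primary keys
# => [1, 2, 3]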
# File lib/sequel/dataset/actions.rb, line 501 def multi_insert(hashes, opts=OPTS) return if hashes.empty? columns = hashes.first.keys import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts) end
Yields each row in the dataset, but internally uses multiple queries as needed to process the entire result set without keeping all rows in the dataset in memory, even if the underlying driver buffers all query results in memory.
Because this uses multiple queries internally, in order to remain consistent, it also uses a transaction internally. Additionally, to work correctly, the dataset must have an unambiguous order. Using an ambiguous order can result in an infinite loop, as well as subtler bugs such as yielding duplicate rows or rows being skipped.
Sequel checks that the datasets using this method have an order, but it cannot ensure that the order is unambiguous.
Options:
:rows_per_fetch :: The number of rows to fetch per query. Defaults to 1000.
:strategy :: The strategy to use for paging of results. By default this is :offset, for using an approach with a limit and offset for every page. This can be set to :filter, which uses a limit and a filter that excludes rows from previous pages. In order for this strategy to work, you must be selecting the columns you are ordering by, and none of the columns can contain NULLs. Note that some Sequel adapters have optimized implementations that will use cursors or streaming regardless of the :strategy option used.
:filter_values :: If the :strategy=>:filter option is used, this option should be a proc that accepts the last retrieved row for the previous page and an array of ORDER BY expressions, and returns an array of values relating to those expressions for the last retrieved row. You will need to use this option if your ORDER BY expressions are not simple columns, if they contain qualified identifiers that would be ambiguous unqualified, if they contain any identifiers that are aliased in SELECT, and potentially other cases.
Examples:
DB[:table].order(:id).paged_each{|row| } # SELECT * FROM table ORDER BY id LIMIT 1000 # SELECT * FROM table ORDER BY id LIMIT 1000 OFFSET 1000 # ... DB[:table].order(:id).paged_each(:rows_per_fetch=>100){|row| } # SELECT * FROM table ORDER BY id LIMIT 100 # SELECT * FROM table ORDER BY id LIMIT 100 OFFSET 100 # ... DB[:table].order(:id).paged_each(:strategy=>:filter){|row| } # SELECT * FROM table ORDER BY id LIMIT 1000 # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000 # ... DB[:table].order(:table__id).paged_each(:strategy=>:filter, :filter_values=>proc{|row, exprs| [row[:id]]}){|row| } # SELECT * FROM table ORDER BY id LIMIT 1000 # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000 # ...
# File lib/sequel/dataset/actions.rb, line 558 def paged_each(opts=OPTS) unless @opts[:order] raise Sequel::Error, "Dataset#paged_each requires the dataset be ordered" end unless block_given? return enum_for(:paged_each, opts) end total_limit = @opts[:limit] offset = @opts[:offset] if server = @opts[:server] opts = Hash[opts] opts[:server] = server end rows_per_fetch = opts[:rows_per_fetch] || 1000 strategy = if offset || total_limit :offset else opts[:strategy] || :offset end db.transaction(opts) do case strategy when :filter filter_values = opts[:filter_values] || proc{|row, exprs| exprs.map{|e| row[hash_key_symbol(e)]}} base_ds = ds = limit(rows_per_fetch) while ds last_row = nil ds.each do |row| last_row = row yield row end ds = (base_ds.where(ignore_values_preceding(last_row, &filter_values)) if last_row) end else offset ||= 0 num_rows_yielded = rows_per_fetch total_rows = 0 while num_rows_yielded == rows_per_fetch && (total_limit.nil? || total_rows < total_limit) if total_limit && total_rows + rows_per_fetch > total_limit rows_per_fetch = total_limit - total_rows end num_rows_yielded = 0 limit(rows_per_fetch, offset).each do |row| num_rows_yielded += 1 total_rows += 1 if total_limit yield row end offset += rows_per_fetch end end end self end
Returns a Range instance made from the minimum and maximum values for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].range(:id) # SELECT min(id) AS v1, max(id) AS v2 FROM table LIMIT 1 # => 1..10 DB[:table].range{function(column)} # SELECT min(function(column)) AS v1, max(function(column)) AS v2 FROM table LIMIT 1 # => 0..7
# File lib/sequel/dataset/actions.rb, line 625 def range(column=Sequel.virtual_row(&Proc.new)) r = if loader = cached_placeholder_literalizer(:_range_loader) do |pl| arg = pl.arg aggregate_dataset.limit(1).select(SQL::Function.new(:min, arg).as(:v1), SQL::Function.new(:max, arg).as(:v2)) end loader.first(column) else aggregate_dataset.select{[min(column).as(v1), max(column).as(v2)]}.first end if r (r[:v1]..r[:v2]) end end
Returns a hash with key_column values as keys and value_column values as values. Similar to #to_hash, but only selects the columns given. Like #to_hash, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash(:id, :name) # SELECT id, name FROM table # => {1=>'a', 2=>'b', ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash([:id, :foo], [:name, :bar]) # SELECT id, foo, name, bar FROM table # {[1, 3]=>['a', 'c'], [2, 4]=>['b', 'd'], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
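For example, a sketch using an aliased function expression (assumes the database provides a lower function):
DB[:table].select_hash(Sequel.function(:lower, :name).as(:name), :id) # SELECT lower(name) AS name, id FROM table # => {'jim'=>1, 'bob'=>2, ...}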
# File lib/sequel/dataset/actions.rb, line 658 def select_hash(key_column, value_column, opts = OPTS) _select_hash(:to_hash, key_column, value_column, opts) end
Returns a hash with key_column values as keys and an array of value_column values. Similar to #to_hash_groups, but only selects the columns given. Like #to_hash_groups, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash_groups(:name, :id) # SELECT id, name FROM table # => {'a'=>[1, 4, ...], 'b'=>[2, ...], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash_groups([:first, :middle], [:last, :id]) # SELECT first, middle, last, id FROM table # {['a', 'b']=>[['c', 1], ['d', 2], ...], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 678 def select_hash_groups(key_column, value_column, opts = OPTS) _select_hash(:to_hash_groups, key_column, value_column, opts) end
Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset. If you give a block argument that returns an array with multiple entries, the contents of the resulting array are undefined. Raises an Error if called with both an argument and a block.
DB[:table].select_map(:id) # SELECT id FROM table # => [3, 5, 8, 1, ...] DB[:table].select_map{id * 2} # SELECT (id * 2) FROM table # => [6, 10, 16, 2, ...]
You can also provide an array of column names:
DB[:table].select_map([:id, :name]) # SELECT id, name FROM table # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 702 def select_map(column=nil, &block) _select_map(column, false, &block) end
The same as #select_map, but in addition orders the array by the column.
DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id # => [1, 2, 3, 4, ...] DB[:table].select_order_map{id * 2} # SELECT (id * 2) FROM table ORDER BY (id * 2) # => [2, 4, 6, 8, ...]
You can also provide an array of column names:
DB[:table].select_order_map([:id, :name]) # SELECT id, name FROM table ORDER BY id, name # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 722 def select_order_map(column=nil, &block) _select_map(column, true, &block) end
Limits the dataset to one record, and returns the first record in the dataset, or nil if the dataset has no records. Users should probably use first instead of this method. Example:
DB[:test].single_record # SELECT * FROM test LIMIT 1 # => {:column_name=>'value'}
# File lib/sequel/dataset/actions.rb, line 732 def single_record _single_record_ds.single_record! end
Returns the first record in the dataset, without limiting the dataset. Returns nil if the dataset has no records. Users should probably use first instead of this method. This should only be used if you know the dataset is already limited to a single record. This method may be desirable to use for performance reasons, as it does not clone the receiver. Example:
DB[:test].single_record! # SELECT * FROM test # => {:column_name=>'value'}
# File lib/sequel/dataset/actions.rb, line 744 def single_record! with_sql_first(select_sql) end
Returns the first value of the first record in the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Example:
DB[:test].single_value # SELECT * FROM test LIMIT 1 # => 'value'
# File lib/sequel/dataset/actions.rb, line 754 def single_value single_value_ds.each do |r| r.each{|_, v| return v} end nil end
Returns the first value of the first record in the dataset, without limiting the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Should not be used on graphed datasets or datasets that have row_procs that don’t return hashes. This method may be desirable to use for performance reasons, as it does not clone the receiver.
DB[:test].single_value! # SELECT * FROM test # => 'value'
# File lib/sequel/dataset/actions.rb, line 769 def single_value! with_sql_single_value(select_sql) end
Returns the sum for the given column/expression. Uses a virtual row block if no column is given.
DB[:table].sum(:id) # SELECT sum(id) FROM table LIMIT 1 # => 55 DB[:table].sum{function(column)} # SELECT sum(function(column)) FROM table LIMIT 1 # => 10
# File lib/sequel/dataset/actions.rb, line 780 def sum(arg=Sequel.virtual_row(&Proc.new)) _aggregate(:sum, arg) end
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].to_hash(:id, :name) # SELECT * FROM table # {1=>'Jim', 2=>'Bob', ...} DB[:table].to_hash(:id) # SELECT * FROM table # {1=>{:id=>1, :name=>'Jim'}, 2=>{:id=>2, :name=>'Bob'}, ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].to_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table # {[1, 3]=>['Jim', 'bo'], [2, 4]=>['Bob', 'be'], ...} DB[:table].to_hash([:id, :name]) # SELECT * FROM table # {[1, 'Jim']=>{:id=>1, :name=>'Jim'}, [2, 'Bob']=>{:id=>2, :name=>'Bob'}, ...}
Options:
:all :: Use all instead of each to retrieve the objects.
:hash :: The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc.
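As a sketch, the :hash option can seed the result with a hash that has a default value, so lookups for missing keys do not return nil:
names = DB[:table].to_hash(:id, :name, :hash=>Hash.new('(unknown)'))
names[1]    # => 'Jim'
names[999]  # => '(unknown)' for ids not present in the table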
# File lib/sequel/dataset/actions.rb, line 809 def to_hash(key_column, value_column = nil, opts = OPTS) h = opts[:hash] || {} meth = opts[:all] ? :all : :each if value_column return naked.to_hash(key_column, value_column, opts) if row_proc if value_column.is_a?(Array) if key_column.is_a?(Array) send(meth){|r| h[r.values_at(*key_column)] = r.values_at(*value_column)} else send(meth){|r| h[r[key_column]] = r.values_at(*value_column)} end else if key_column.is_a?(Array) send(meth){|r| h[r.values_at(*key_column)] = r[value_column]} else send(meth){|r| h[r[key_column]] = r[value_column]} end end elsif key_column.is_a?(Array) send(meth){|r| h[key_column.map{|k| r[k]}] = r} else send(meth){|r| h[r[key_column]] = r} end h end
Returns a hash with one column used as key and the values being an array of column values. If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].to_hash_groups(:name, :id) # SELECT * FROM table # {'Jim'=>[1, 4, 16, ...], 'Bob'=>[2], ...} DB[:table].to_hash_groups(:name) # SELECT * FROM table # {'Jim'=>[{:id=>1, :name=>'Jim'}, {:id=>4, :name=>'Jim'}, ...], 'Bob'=>[{:id=>2, :name=>'Bob'}], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].to_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table # {['Jim', 'Bob']=>[['Smith', 1], ['Jackson', 4], ...], ...} DB[:table].to_hash_groups([:first, :middle]) # SELECT * FROM table # {['Jim', 'Bob']=>[{:id=>1, :first=>'Jim', :middle=>'Bob', :last=>'Smith'}, ...], ...}
Options:
:all :: Use all instead of each to retrieve the objects.
:hash :: The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc.
# File lib/sequel/dataset/actions.rb, line 859 def to_hash_groups(key_column, value_column = nil, opts = OPTS) h = opts[:hash] || {} meth = opts[:all] ? :all : :each if value_column return naked.to_hash_groups(key_column, value_column, opts) if row_proc if value_column.is_a?(Array) if key_column.is_a?(Array) send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r.values_at(*value_column)} else send(meth){|r| (h[r[key_column]] ||= []) << r.values_at(*value_column)} end else if key_column.is_a?(Array) send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r[value_column]} else send(meth){|r| (h[r[key_column]] ||= []) << r[value_column]} end end elsif key_column.is_a?(Array) send(meth){|r| (h[key_column.map{|k| r[k]}] ||= []) << r} else send(meth){|r| (h[r[key_column]] ||= []) << r} end h end
Truncates the dataset. Returns nil.
DB[:table].truncate # TRUNCATE table # => nil
# File lib/sequel/dataset/actions.rb, line 889 def truncate execute_ddl(truncate_sql) end
Updates values for the dataset. The returned value is generally the number of rows updated, but that is adapter dependent. values should be a hash where the keys are columns to set and values are the values to which to set the columns.
DB[:table].update(:x=>nil) # UPDATE table SET x = NULL # => 10 DB[:table].update(:x=>Sequel[:x]+1, :y=>0) # UPDATE table SET x = (x + 1), y = 0 # => 10
# File lib/sequel/dataset/actions.rb, line 903 def update(values=OPTS, &block) sql = update_sql(values) if uses_returning?(:update) returning_fetch_rows(sql, &block) else execute_dui(sql) end end
Run the given SQL and return an array of all rows. If a block is given, each row is yielded to the block after all rows are loaded. See with_sql_each.
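For example (illustrative SQL and data):
DB[:table].with_sql_all("SELECT * FROM table WHERE id < 3") # => [{:id=>1, :name=>'Jim'}, {:id=>2, :name=>'Bob'}]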
# File lib/sequel/dataset/actions.rb, line 914 def with_sql_all(sql, &block) _all(block){|a| with_sql_each(sql){|r| a << r}} end
Execute the given SQL and return the number of rows deleted. This exists solely as an optimization, replacing #with_sql(sql).delete. It’s significantly faster as it does not require cloning the current dataset.
# File lib/sequel/dataset/actions.rb, line 921 def with_sql_delete(sql) execute_dui(sql) end
Run the given SQL and yield each returned row to the block.
This method should not be called on a shared dataset if the columns selected in the given SQL do not match the columns in the receiver.
# File lib/sequel/dataset/actions.rb, line 930 def with_sql_each(sql) if rp = row_proc fetch_rows(sql){|r| yield rp.call(r)} else fetch_rows(sql){|r| yield r} end self end
Run the given SQL and return the first row, or nil if no rows were returned. See with_sql_each.
# File lib/sequel/dataset/actions.rb, line 941 def with_sql_first(sql) with_sql_each(sql){|r| return r} nil end
Execute the given SQL and (on most databases) return the primary key of the inserted row.
# File lib/sequel/dataset/actions.rb, line 957 def with_sql_insert(sql) execute_insert(sql) end
Run the given SQL and return the first value in the first row, or nil if no rows were returned. For this to make sense, the SQL given should select only a single value. See with_sql_each.
# File lib/sequel/dataset/actions.rb, line 949 def with_sql_single_value(sql) if r = with_sql_first(sql) r.each{|_, v| return v} end end
Internals of import. If primary key values are requested, use separate insert commands for each row. Otherwise, call multi_insert_sql and execute each statement it gives separately.
# File lib/sequel/dataset/actions.rb, line 966 def _import(columns, values, opts) trans_opts = Hash[opts].merge!(:server=>@opts[:server]) if opts[:return] == :primary_key @db.transaction(trans_opts){values.map{|v| insert(columns, v)}} else stmts = multi_insert_sql(columns, values) @db.transaction(trans_opts){stmts.each{|st| execute_dui(st)}} end end
Return an array of arrays of values given by the symbols in ret_cols.
# File lib/sequel/dataset/actions.rb, line 977 def _select_map_multiple(ret_cols) map{|r| r.values_at(*ret_cols)} end
Returns an array of the first value in each row.
# File lib/sequel/dataset/actions.rb, line 982 def _select_map_single k = nil map{|r| r[k||=r.keys.first]} end
A dataset for returning single values from the current dataset.
# File lib/sequel/dataset/actions.rb, line 988 def single_value_ds clone(:limit=>1).ungraphed.naked end
These are methods you can call to see what SQL will be generated by the dataset.
Returns an EXISTS clause for the dataset as an SQL::PlaceholderLiteralString.
DB.select(1).where(DB[:items].exists) # SELECT 1 WHERE (EXISTS (SELECT * FROM items))
# File lib/sequel/dataset/sql.rb, line 14 def exists SQL::PlaceholderLiteralString.new(EXISTS, [self], true) end
Returns an INSERT SQL query string. See insert.
DB[:items].insert_sql(:a=>1) # => "INSERT INTO items (a) VALUES (1)"
# File lib/sequel/dataset/sql.rb, line 22 def insert_sql(*values) return static_sql(@opts[:sql]) if @opts[:sql] check_modification_allowed! columns = [] case values.size when 0 return insert_sql({}) when 1 case vals = values[0] when Hash values = [] vals.each do |k,v| columns << k values << v end when Dataset, Array, LiteralString values = vals end when 2 if (v0 = values[0]).is_a?(Array) && ((v1 = values[1]).is_a?(Array) || v1.is_a?(Dataset) || v1.is_a?(LiteralString)) columns, values = v0, v1 raise(Error, "Different number of values and columns given to insert_sql") if values.is_a?(Array) and columns.length != values.length end end if values.is_a?(Array) && values.empty? && !insert_supports_empty_values? columns, values = insert_empty_columns_values end clone(:columns=>columns, :values=>values).send(:_insert_sql) end
Append a literal representation of a value to the given SQL string. If an unsupported object is given, an Error is raised.
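The non-append form, Dataset#literal, returns the literalized string directly and is the usual entry point; a brief sketch (exact output is adapter dependent):
ds = DB[:items]
ds.literal(:name)                  # => "name" (quoted if identifier quoting is enabled)
ds.literal("O'Brien")              # => "'O''Brien'"
ds.literal([1, 2, 3])              # => "(1, 2, 3)"
ds.literal(Date.new(2020, 1, 15))  # => "'2020-01-15'"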
# File lib/sequel/dataset/sql.rb, line 59 def literal_append(sql, v) case v when Symbol if skip_symbol_cache? literal_symbol_append(sql, v) else unless l = db.literal_symbol(v) l = String.new literal_symbol_append(l, v) db.literal_symbol_set(v, l) end sql << l end when String case v when LiteralString sql << v when SQL::Blob literal_blob_append(sql, v) else literal_string_append(sql, v) end when Integer sql << literal_integer(v) when Hash literal_hash_append(sql, v) when SQL::Expression literal_expression_append(sql, v) when Float sql << literal_float(v) when BigDecimal sql << literal_big_decimal(v) when NilClass sql << literal_nil when TrueClass sql << literal_true when FalseClass sql << literal_false when Array literal_array_append(sql, v) when Time v.is_a?(SQLTime) ? literal_sqltime_append(sql, v) : literal_time_append(sql, v) when DateTime literal_datetime_append(sql, v) when Date sql << literal_date(v) when Dataset literal_dataset_append(sql, v) else literal_other_append(sql, v) end end
Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects a keys array and an array of value arrays.
This method should be overridden by descendants if they support inserting multiple records in a single SQL statement.
# File lib/sequel/dataset/sql.rb, line 118 def multi_insert_sql(columns, values) case multi_insert_sql_strategy when :values sql = LiteralString.new('VALUES ') expression_list_append(sql, values.map{|r| Array(r)}) [insert_sql(columns, sql)] when :union c = false sql = LiteralString.new u = ' UNION ALL SELECT ' f = empty_from_sql values.each do |v| if c sql << u else sql << 'SELECT ' c = true end expression_list_append(sql, v) sql << f if f end [insert_sql(columns, sql)] else values.map{|r| insert_sql(columns, r)} end end
Same as select_sql, not aliased directly to make subclassing simpler.
# File lib/sequel/dataset/sql.rb, line 146 def sql select_sql end
Returns a TRUNCATE SQL query string. See truncate.
DB[:items].truncate_sql # => 'TRUNCATE items'
# File lib/sequel/dataset/sql.rb, line 153 def truncate_sql if opts[:sql] static_sql(opts[:sql]) else check_truncation_allowed! check_not_limited!(:truncate) raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where] || opts[:having] t = String.new source_list_append(t, opts[:from]) _truncate_sql(t) end end
Formats an UPDATE statement using the given values. See update.
DB[:items].update_sql(:price => 100, :category => 'software') # => "UPDATE items SET price = 100, category = 'software'"
Raises an Error if the dataset is grouped or includes more than one table.
# File lib/sequel/dataset/sql.rb, line 173 def update_sql(values = OPTS) return static_sql(opts[:sql]) if opts[:sql] check_modification_allowed! check_not_limited!(:update) case values when LiteralString # nothing when String Sequel::Deprecation.deprecate("Calling Sequel::Dataset#update/update_sql with a plain string", "Use Sequel.lit(#{values.inspect}) to create a literal string and pass that to update/update_sql, or use the auto_literal_strings extension") # raise Error, "plain string passed to Dataset#update" # SEQUEL5 end clone(:values=>values).send(:_update_sql) end
These methods all return booleans, with most describing whether or not the dataset supports a feature.
Whether this dataset will provide an accurate number of rows matched for delete and update statements. Accurate in this case means the number of rows matched by the dataset’s filter.
# File lib/sequel/dataset/features.rb, line 19 def provides_accurate_rows_matched? true end
Whether this dataset quotes identifiers.
# File lib/sequel/dataset/features.rb, line 12 def quote_identifiers? @opts.fetch(:quote_identifiers, true) end
Whether you must use a column alias list for recursive CTEs (false by default).
# File lib/sequel/dataset/features.rb, line 25 def recursive_cte_requires_column_aliases? false end
Whether type specifiers are required for prepared statement/bound variable argument placeholders (i.e. :bv__integer)
# File lib/sequel/dataset/features.rb, line 37 def requires_placeholder_type_specifiers? false end
Whether the dataset requires SQL standard datetimes (false by default, as most allow strings with ISO 8601 format).
# File lib/sequel/dataset/features.rb, line 31 def requires_sql_standard_datetimes? false end
Whether the dataset supports common table expressions (the WITH clause). If given, type can be :select, :insert, :update, or :delete, in which case it determines whether WITH is supported for the respective statement type.
# File lib/sequel/dataset/features.rb, line 44 def supports_cte?(type=:select) false end
Whether the dataset supports common table expressions (the WITH clause) in subqueries. If false, applies the WITH clause to the main query, which can cause issues if multiple WITH clauses use the same name.
# File lib/sequel/dataset/features.rb, line 51 def supports_cte_in_subqueries? false end
Whether the database supports derived column lists (e.g. “table_expr AS table_alias(column_alias1, column_alias2, …)”), true by default.
# File lib/sequel/dataset/features.rb, line 58 def supports_derived_column_lists? true end
Whether the dataset supports CUBE with GROUP BY.
# File lib/sequel/dataset/features.rb, line 68 def supports_group_cube? false end
Whether the dataset supports ROLLUP with GROUP BY.
# File lib/sequel/dataset/features.rb, line 73 def supports_group_rollup? false end
Whether the dataset supports GROUPING SETS with GROUP BY.
# File lib/sequel/dataset/features.rb, line 78 def supports_grouping_sets? false end
Whether this dataset supports the insert_select method for returning all column values directly from an insert query.
# File lib/sequel/dataset/features.rb, line 84 def supports_insert_select? supports_returning?(:insert) end
Whether the dataset supports the INTERSECT and EXCEPT compound operations, true by default.
# File lib/sequel/dataset/features.rb, line 89 def supports_intersect_except? true end
Whether the dataset supports the IS TRUE syntax.
# File lib/sequel/dataset/features.rb, line 99 def supports_is_true? true end
Whether the dataset supports the JOIN table USING (column1, …) syntax.
# File lib/sequel/dataset/features.rb, line 104 def supports_join_using? true end
Whether modifying joined datasets is supported.
# File lib/sequel/dataset/features.rb, line 119 def supports_modifying_joins? false end
Whether the IN/NOT IN operators support multiple columns when an array of values is given.
# File lib/sequel/dataset/features.rb, line 125 def supports_multiple_column_in? true end
Whether the dataset supports pattern matching by regular expressions.
# File lib/sequel/dataset/features.rb, line 141 def supports_regexp? false end
Whether the dataset supports REPLACE syntax, false by default.
# File lib/sequel/dataset/features.rb, line 146 def supports_replace? false end
Whether the RETURNING clause is supported for the given type of query. type can be :insert, :update, or :delete.
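Application code can branch on this predicate before relying on RETURNING; a hedged sketch:
ds = DB[:items]
if ds.supports_returning?(:insert)
  # Adapters with RETURNING can yield the inserted columns to the block
  ds.returning(:id).insert(:name=>'a'){|row| puts row[:id]}
else
  id = ds.insert(:name=>'a')
end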
# File lib/sequel/dataset/features.rb, line 152 def supports_returning?(type) false end
Whether the dataset supports skipping locked rows when returning data.
# File lib/sequel/dataset/features.rb, line 157 def supports_skip_locked? false end
Whether the dataset supports timezones in literal timestamps
# File lib/sequel/dataset/features.rb, line 167 def supports_timestamp_timezones? false end
Whether the dataset supports fractional seconds in literal timestamps
# File lib/sequel/dataset/features.rb, line 172 def supports_timestamp_usecs? true end
Whether the dataset supports window functions.
# File lib/sequel/dataset/features.rb, line 177 def supports_window_functions? false end
These methods don’t fit cleanly into another section.
db :: The database related to this dataset. This is the Database instance that will execute all of this dataset’s queries.
opts :: The hash of options for this dataset, keys are symbols.
Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:
DB[:posts]
Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter provides a subclass of Sequel::Dataset, and has the Sequel::Database#dataset method return an instance of that subclass.
# File lib/sequel/dataset/misc.rb, line 34 def initialize(db) @db = db @opts = {} # OPTS # SEQUEL5 @cache = {} end
Define a hash value such that datasets with the same class, DB, and opts will be considered equal.
# File lib/sequel/dataset/misc.rb, line 42 def ==(o) o.is_a?(self.class) && db == o.db && opts == o.opts end
An object representing the current date or time, should be an instance of Sequel.datetime_class.
# File lib/sequel/dataset/misc.rb, line 48 def current_datetime Sequel.datetime_class.now end
Similar to clone, but returns an unfrozen clone if the receiver is frozen.
# File lib/sequel/dataset/misc.rb, line 63 def dup _clone(:freeze=>false) end
Yield a dataset for each server in the connection pool that is tied to that server. Intended for use in sharded environments where all servers need to be modified with the same data:
DB[:configs].where(:key=>'setting').each_server{|ds| ds.update(:value=>'new_value')}
# File lib/sequel/dataset/misc.rb, line 81 def each_server db.servers.each{|s| yield server(s)} end
Alias for ==
# File lib/sequel/dataset/misc.rb, line 53 def eql?(o) self == o end
Returns the string with the LIKE metacharacters (% and _) escaped. Useful for when the LIKE term is a user-provided string where metacharacters should not be recognized. Example:
ds.escape_like("foo\\%_") # 'foo\\\%\_'
# File lib/sequel/dataset/misc.rb, line 90 def escape_like(string) string.gsub(/[\%_]/){|m| "\\#{m}"} end
Alias of first_source_alias
# File lib/sequel/dataset/misc.rb, line 114 def first_source first_source_alias end
The first source (primary table) for this dataset. If the dataset doesn’t have a table, raises an Error. If the table is aliased, returns the aliased name.
DB[:table].first_source_alias # => :table DB[:table___t].first_source_alias # => :t
# File lib/sequel/dataset/misc.rb, line 126 def first_source_alias source = @opts[:from] if source.nil? || source.empty? raise Error, 'No source specified for query' end case s = source.first when SQL::AliasedExpression s.alias when Symbol _, _, aliaz = split_symbol(s) aliaz ? aliaz.to_sym : s else s end end
The first source (primary table) for this dataset. If the dataset doesn’t have a table, raises an Error. If the table is aliased, returns the original table, not the alias.
DB[:table].first_source_table # => :table DB[:table___t].first_source_table # => :table
# File lib/sequel/dataset/misc.rb, line 151 def first_source_table source = @opts[:from] if source.nil? || source.empty? raise Error, 'No source specified for query' end case s = source.first when SQL::AliasedExpression s.expression when Symbol sch, table, aliaz = split_symbol(s) aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s else s end end
Freeze the opts when freezing the dataset.
# File lib/sequel/dataset/misc.rb, line 96 def freeze @opts.freeze super end
Define a hash value such that datasets with the same class, DB, and opts, will have the same hash value.
# File lib/sequel/dataset/misc.rb, line 169 def hash [self.class, db, opts].hash end
Returns a string representation of the dataset including the class name and the corresponding SQL select statement.
# File lib/sequel/dataset/misc.rb, line 175 def inspect "#<#{visible_class_name}: #{sql.inspect}>" end
Whether this dataset is a joined dataset (multiple FROM tables or any JOINs).
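For instance (with hypothetical tables a and b):
DB[:a].joined_dataset?                # => false
DB[:a].cross_join(:b).joined_dataset? # => true
DB[:a, :b].joined_dataset?            # => true (multiple FROM tables)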
# File lib/sequel/dataset/misc.rb, line 180 def joined_dataset? !!((opts[:from].is_a?(Array) && opts[:from].size > 1) || opts[:join]) end
The alias to use for the row_number column, used when emulating OFFSET support and for eager limit strategies
# File lib/sequel/dataset/misc.rb, line 186 def row_number_column :x_sequel_row_number_x end
Splits a possible implicit alias in c, handling both SQL::AliasedExpressions and Symbols. Returns an array of two elements, with the first being the main expression, and the second being the alias.
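A brief sketch of the return values:
ds = DB[:table]
ds.split_alias(:products___p)  # => [:products, :p]
ds.split_alias(:products)      # => [:products, nil]
ds.split_alias(:sch__products) # => [qualified identifier for sch.products, nil]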
# File lib/sequel/dataset/misc.rb, line 199 def split_alias(c) case c when Symbol c_table, column, aliaz = split_symbol(c) [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz] when SQL::AliasedExpression [c.expression, c.alias] when SQL::JoinClause [c.table, c.table_alias] else [c, nil] end end
This returns an SQL::Identifier or SQL::AliasedExpression containing an SQL identifier that represents the unqualified column for the given value. The given value should be a Symbol, SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression containing one of those. In other cases, this returns nil
# File lib/sequel/dataset/misc.rb, line 218 def unqualified_column_for(v) unless v.is_a?(String) _unqualified_column_for(v) end end
Creates a unique table alias that hasn’t already been used in the dataset. table_alias can be any type of object accepted by alias_symbol. The symbol returned will be the implicit alias in the argument, possibly appended with “_N” if the implicit alias has already been used, where N is an integer starting at 0 and increasing until an unused one is found.
You can provide a second additional array argument containing symbols that should not be considered valid table aliases. The current aliases for the FROM and JOIN tables are automatically included in this array.
DB[:table].unused_table_alias(:t) # => :t DB[:table].unused_table_alias(:table) # => :table_0 DB[:table, :table_0].unused_table_alias(:table) # => :table_1 DB[:table, :table_0].unused_table_alias(:table, [:table_1, :table_2]) # => :table_3
# File lib/sequel/dataset/misc.rb, line 246 def unused_table_alias(table_alias, used_aliases = []) table_alias = alias_symbol(table_alias) used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from] used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join] if used_aliases.include?(table_alias) i = 0 loop do ta = :"#{table_alias}_#{i}" return ta unless used_aliases.include?(ta) i += 1 end else table_alias end end
Return a modified dataset with quote_identifiers set.
# File lib/sequel/dataset/misc.rb, line 263 def with_quote_identifiers(v) clone(:quote_identifiers=>v, :skip_symbol_cache=>true) end
The cached columns for the current dataset.
# File lib/sequel/dataset/misc.rb, line 294 def _columns cache_get(:_columns) end
Retrieve a value from the dataset’s cache in a thread safe manner.
# File lib/sequel/dataset/misc.rb, line 276 def cache_get(k) Sequel.synchronize{@cache[k]} end
Set a value in the dataset’s cache in a thread safe manner.
# File lib/sequel/dataset/misc.rb, line 281 def cache_set(k, v) Sequel.synchronize{@cache[k] = v} end
Clear the columns hash for the current dataset. This is not a thread safe operation, so it should only be used if the dataset could not be used by another thread (such as one that was just created via clone).
# File lib/sequel/dataset/misc.rb, line 289 def clear_columns_cache @cache.delete(:_columns) end
These methods modify the receiving dataset and should be used with care.
All methods that should have a ! method added that modifies the receiver.
Whether #freeze can actually freeze datasets. True only on ruby 2.4+, as it requires clone(freeze: false)
Setup mutation (e.g. filter!) methods. These operate the same as the non-! methods, but replace the options of the current dataset with the options of the resulting dataset.
Do not call this method with untrusted input, as that can result in arbitrary code execution.
# File lib/sequel/dataset/mutation.rb, line 20 def self.def_mutation_method(*meths) options = meths.pop if meths.last.is_a?(Hash) mod = options[:module] if options mod ||= self meths.each do |meth| mod.class_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end", __FILE__, __LINE__) end end
Like extension, but modifies and returns the receiver instead of returning a modified clone.
# File lib/sequel/dataset/mutation.rb, line 33 def extension!(*exts) raise_if_frozen!(%w"extension! extension") _extension!(exts) end
Avoid self-referential dataset by cloning.
# File lib/sequel/dataset/mutation.rb, line 39 def from_self!(*args, &block) raise_if_frozen!(%w"from_self! from_self") @opts = clone.from_self(*args, &block).opts self end
Remove the #row_proc from the current dataset.
# File lib/sequel/dataset/mutation.rb, line 46 def naked! raise_if_frozen!(%w"naked! naked") @opts[:row_proc] = nil self end
Set whether to quote identifiers for this dataset
# File lib/sequel/dataset/mutation.rb, line 53 def quote_identifiers=(v) raise_if_frozen!(%w"quote_identifiers= with_quote_identifiers") skip_symbol_cache! @opts[:quote_identifiers] = v end
Override the #row_proc for this dataset
# File lib/sequel/dataset/mutation.rb, line 60 def row_proc=(v) raise_if_frozen!(%w"row_proc= with_row_proc") @opts[:row_proc] = v end
These methods, while public, are not designed to be used directly by the end user.
Given a type (e.g. select) and an array of clauses, return an array of methods to call to build the SQL string.
# File lib/sequel/dataset/sql.rb, line 196 def self.clause_methods(type, clauses) clauses.map{|clause| :"#{type}_#{clause}_sql"}.freeze end
Define a dataset literalization method for the given type in the given module, using the given clauses.
Arguments:
mod :: Module in which to define the method
type :: Type of SQL literalization method to create, either :select, :insert, :update, or :delete
clauses :: Array of clauses that make up the SQL query for the type. This can either be a single array of symbols/strings, or an array of pairs, with the first element in each pair being an if/elsif/else code fragment, and the second element in each pair being an array of symbols/strings for the appropriate branch.
# File lib/sequel/dataset/sql.rb, line 210 def self.def_sql_method(mod, type, clauses) priv = type == :update || type == :insert cacheable = type == :select || type == :delete lines = [] lines << 'private' if priv lines << "def #{'_' if priv}#{type}_sql" lines << 'if sql = opts[:sql]; return static_sql(sql) end' unless priv lines << "if sql = cache_get(:_#{type}_sql); return sql end" if cacheable lines << 'check_modification_allowed!' << 'check_not_limited!(:delete)' if type == :delete lines << 'sql = @opts[:append_sql] || sql_string_origin' if clauses.all?{|c| c.is_a?(Array)} clauses.each do |i, cs| lines << i lines.concat(clause_methods(type, cs).map{|x| "#{x}(sql)"}) end lines << 'end' else lines.concat(clause_methods(type, clauses).map{|x| "#{x}(sql)"}) end lines << "cache_set(:_#{type}_sql, sql) if cache_sql?" if cacheable lines << 'sql' lines << 'end' mod.class_eval lines.join("\n"), __FILE__, __LINE__ end
Append literalization of aliased expression to SQL string.
# File lib/sequel/dataset/sql.rb, line 461 def aliased_expression_sql_append(sql, ae) literal_append(sql, ae.expression) as_sql_append(sql, ae.alias, ae.columns) end
Append literalization of array to SQL string.
# File lib/sequel/dataset/sql.rb, line 467 def array_sql_append(sql, a) if a.empty? sql << '(NULL)' else sql << '(' expression_list_append(sql, a) sql << ')' end end
Append literalization of boolean constant to SQL string.
# File lib/sequel/dataset/sql.rb, line 478 def boolean_constant_sql_append(sql, constant) if (constant == true || constant == false) && !supports_where_true? sql << (constant == true ? '(1 = 1)' : '(1 = 0)') else literal_append(sql, constant) end end
Append literalization of case expression to SQL string.
# File lib/sequel/dataset/sql.rb, line 487 def case_expression_sql_append(sql, ce) sql << '(CASE' if ce.expression? sql << ' ' literal_append(sql, ce.expression) end w = " WHEN " t = " THEN " ce.conditions.each do |c,r| sql << w literal_append(sql, c) sql << t literal_append(sql, r) end sql << " ELSE " literal_append(sql, ce.default) sql << " END)" end
Append literalization of cast expression to SQL string.
# File lib/sequel/dataset/sql.rb, line 507 def cast_sql_append(sql, expr, type) sql << 'CAST(' literal_append(sql, expr) sql << ' AS ' << db.cast_type_literal(type).to_s sql << ')' end
Append literalization of column all selection to SQL string.
# File lib/sequel/dataset/sql.rb, line 515 def column_all_sql_append(sql, ca) qualified_identifier_sql_append(sql, ca.table, WILDCARD) end
Append literalization of complex expression to SQL string.
# File lib/sequel/dataset/sql.rb, line 520 def complex_expression_sql_append(sql, op, args) case op when *IS_OPERATORS r = args[1] if r.nil? || supports_is_true? raise(InvalidOperation, 'Invalid argument used for IS operator') unless val = IS_LITERALS[r] sql << '(' literal_append(sql, args[0]) sql << ' ' << op.to_s << ' ' sql << val << ')' elsif op == :IS complex_expression_sql_append(sql, :"=", args) else complex_expression_sql_append(sql, :OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args[0], nil)]) end when :IN, :"NOT IN" cols = args[0] vals = args[1] col_array = true if cols.is_a?(Array) if vals.is_a?(Array) val_array = true empty_val_array = vals == [] end if empty_val_array literal_append(sql, empty_array_value(op, cols)) elsif col_array if !supports_multiple_column_in? if val_array expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})}) literal_append(sql, op == :IN ? expr : ~expr) else old_vals = vals vals = vals.naked if vals.is_a?(Sequel::Dataset) vals = vals.to_a val_cols = old_vals.columns complex_expression_sql_append(sql, op, [cols, vals.map!{|x| x.values_at(*val_cols)}]) end else # If the columns and values are both arrays, use array_sql instead of # literal so that if values is an array of two element arrays, it # will be treated as a value list instead of a condition specifier. sql << '(' literal_append(sql, cols) sql << ' ' << op.to_s << ' ' if val_array array_sql_append(sql, vals) else literal_append(sql, vals) end sql << ')' end else sql << '(' literal_append(sql, cols) sql << ' ' << op.to_s << ' ' literal_append(sql, vals) sql << ')' end when :LIKE, :'NOT LIKE' sql << '(' literal_append(sql, args[0]) sql << ' ' << op.to_s << ' ' literal_append(sql, args[1]) sql << " ESCAPE " literal_append(sql, "\\") sql << ')' when :ILIKE, :'NOT ILIKE' complex_expression_sql_append(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|v| Sequel.function(:UPPER, v)}) when :** function_sql_append(sql, Sequel.function(:power, *args)) when *TWO_ARITY_OPERATORS if REGEXP_OPERATORS.include?(op) && !supports_regexp? raise InvalidOperation, "Pattern matching via regular expressions is not supported on #{db.database_type}" end sql << '(' literal_append(sql, args[0]) sql << ' ' << op.to_s << ' ' literal_append(sql, args[1]) sql << ')' when *N_ARITY_OPERATORS sql << '(' c = false op_str = " #{op} " args.each do |a| sql << op_str if c literal_append(sql, a) c ||= true end sql << ')' when :NOT sql << 'NOT ' literal_append(sql, args[0]) when :NOOP literal_append(sql, args[0]) when :'B~' sql << '~' literal_append(sql, args[0]) when :extract sql << 'extract(' << args[0].to_s << ' FROM ' literal_append(sql, args[1]) sql << ')' else raise(InvalidOperation, "invalid operator #{op}") end end
Append literalization of constant to SQL string.
# File lib/sequel/dataset/sql.rb, line 627 def constant_sql_append(sql, constant) sql << constant.to_s end
Append literalization of delayed evaluation to SQL string, causing the delayed evaluation proc to be evaluated.
# File lib/sequel/dataset/sql.rb, line 633 def delayed_evaluation_sql_append(sql, delay) # Delayed evaluations are used specifically so the SQL # can differ in subsequent calls, so we definitely don't # want to cache the sql in this case. disable_sql_caching! if recorder = @opts[:placeholder_literalizer] recorder.use(sql, lambda{delay.call(self)}, nil) else literal_append(sql, delay.call(self)) end end
Append literalization of function call to SQL string.
# File lib/sequel/dataset/sql.rb, line 647 def function_sql_append(sql, f) name = f.name opts = f.opts if opts[:emulate] if emulate_function?(name) emulate_function_sql_append(sql, f) return end name = native_function_name(name) end sql << 'LATERAL ' if opts[:lateral] case name when SQL::Identifier if supports_quoted_function_names? && opts[:quoted] literal_append(sql, name) else sql << name.value.to_s end when SQL::QualifiedIdentifier if supports_quoted_function_names? && opts[:quoted] != false literal_append(sql, name) else sql << split_qualifiers(name).join('.') end else if supports_quoted_function_names? && opts[:quoted] quote_identifier_append(sql, name) else sql << name.to_s end end sql << '(' if opts[:*] sql << '*' else sql << "DISTINCT " if opts[:distinct] expression_list_append(sql, f.args) if order = opts[:order] sql << " ORDER BY " expression_list_append(sql, order) end end sql << ')' if group = opts[:within_group] sql << " WITHIN GROUP (ORDER BY " expression_list_append(sql, group) sql << ')' end if filter = opts[:filter] sql << " FILTER (WHERE " literal_append(sql, filter_expr(filter, &opts[:filter_block])) sql << ')' end if window = opts[:over] sql << ' OVER ' window_sql_append(sql, window.opts) end if opts[:with_ordinality] sql << " WITH ORDINALITY" end end
Append literalization of JOIN clause without ON or USING to SQL string.
# File lib/sequel/dataset/sql.rb, line 719 def join_clause_sql_append(sql, jc) table = jc.table table_alias = jc.table_alias table_alias = nil if table == table_alias && !jc.column_aliases sql << ' ' << join_type_sql(jc.join_type) << ' ' identifier_append(sql, table) as_sql_append(sql, table_alias, jc.column_aliases) if table_alias end
Append literalization of negative boolean constant to SQL string.
# File lib/sequel/dataset/sql.rb, line 744 def negative_boolean_constant_sql_append(sql, constant) sql << 'NOT ' boolean_constant_sql_append(sql, constant) end
Append literalization of ordered expression to SQL string.
# File lib/sequel/dataset/sql.rb, line 750 def ordered_expression_sql_append(sql, oe) literal_append(sql, oe.expression) sql << (oe.descending ? ' DESC' : ' ASC') case oe.nulls when :first sql << " NULLS FIRST" when :last sql << " NULLS LAST" end end
Append literalization of placeholder literal string to SQL string.
# File lib/sequel/dataset/sql.rb, line 762 def placeholder_literal_string_sql_append(sql, pls) args = pls.args str = pls.str sql << '(' if pls.parens if args.is_a?(Hash) if args.empty? sql << str else re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/ loop do previous, q, str = str.partition(re) sql << previous literal_append(sql, args[($1||q[1..-1].to_s).to_sym]) unless q.empty? break if str.empty? end end elsif str.is_a?(Array) len = args.length str.each_with_index do |s, i| sql << s literal_append(sql, args[i]) unless i == len end unless str.length == args.length || str.length == args.length + 1 raise Error, "Mismatched number of placeholders (#{str.length}) and placeholder arguments (#{args.length}) when using placeholder array" end else i = -1 match_len = args.length - 1 loop do previous, q, str = str.partition('?') sql << previous literal_append(sql, args.at(i+=1)) unless q.empty? if str.empty? unless i == match_len raise Error, "Mismatched number of placeholders (#{i+1}) and placeholder arguments (#{args.length}) when using placeholder array" end break end end end sql << ')' if pls.parens end
Append literalization of qualified identifier to SQL string. If 3 arguments are given, the 2nd should be the table/qualifier and the third should be column/qualified. If 2 arguments are given, the 2nd should be an SQL::QualifiedIdentifier.
# File lib/sequel/dataset/sql.rb, line 808 def qualified_identifier_sql_append(sql, table, column=(c = table.column; table = table.table; c)) identifier_append(sql, table) sql << '.' identifier_append(sql, column) end
Append literalization of unqualified identifier to SQL string. Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, appends the name as a string. If identifiers are being quoted, quotes the name with quoted_identifier.
# File lib/sequel/dataset/sql.rb, line 818 def quote_identifier_append(sql, name) if name.is_a?(LiteralString) sql << name else name = name.value if name.is_a?(SQL::Identifier) name = input_identifier(name) if quote_identifiers? quoted_identifier_append(sql, name) else sql << name end end end
Append literalization of identifier or unqualified identifier to SQL string.
# File lib/sequel/dataset/sql.rb, line 833 def quote_schema_table_append(sql, table) schema, table = schema_and_table(table) if schema quote_identifier_append(sql, schema) sql << '.' end quote_identifier_append(sql, table) end
Append literalization of quoted identifier to SQL string. This method quotes the given name with the SQL standard double quote. It should be overridden by subclasses to provide quoting not matching the SQL standard, such as backticks (used by MySQL and SQLite).
# File lib/sequel/dataset/sql.rb, line 846 def quoted_identifier_append(sql, name) sql << '"' << name.to_s.gsub('"', '""') << '"' end
Split the schema information from the table, returning two strings, one for the schema and one for the table. The returned schema may be nil, but the table will always have a string value.
Note that this function does not handle tables with more than one level of qualification (e.g. database.schema.table on Microsoft SQL Server).
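For example (following the same pattern as split_qualifiers below):
ds = DB[:table]
ds.schema_and_table(:t)             # => [nil, 't']
ds.schema_and_table(:s__t)          # => ['s', 't']
ds.schema_and_table(Sequel[:s][:t]) # => ['s', 't']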
# File lib/sequel/dataset/sql.rb, line 857 def schema_and_table(table_name, sch=nil) sch = sch.to_s if sch case table_name when Symbol s, t, _ = split_symbol(table_name) [s||sch, t] when SQL::QualifiedIdentifier [table_name.table.to_s, table_name.column.to_s] when SQL::Identifier [sch, table_name.value.to_s] when String [sch, table_name] else raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String' end end
Splits table_name into an array of strings.
ds.split_qualifiers(:s) # ['s'] ds.split_qualifiers(:t__s) # ['t', 's'] ds.split_qualifiers(Sequel[:d][:t__s]) # ['d', 't', 's'] ds.split_qualifiers(Sequel[:h__d][:t__s]) # ['h', 'd', 't', 's']
# File lib/sequel/dataset/sql.rb, line 880 def split_qualifiers(table_name, *args) case table_name when SQL::QualifiedIdentifier split_qualifiers(table_name.table, nil) + split_qualifiers(table_name.column, nil) else sch, table = schema_and_table(table_name, *args) sch ? [sch, table] : [table] end end
Append literalization of subscripts (SQL array accesses) to SQL string.
# File lib/sequel/dataset/sql.rb, line 891 def subscript_sql_append(sql, s) literal_append(sql, s.f) sql << '[' if s.sub.length == 1 && (range = s.sub.first).is_a?(Range) literal_append(sql, range.begin) sql << ':' e = range.end e -= 1 if range.exclude_end? && e.is_a?(Integer) literal_append(sql, e) else expression_list_append(sql, s.sub) end sql << ']' end
Append literalization of windows (for window functions) to SQL string.
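For context, the window options are usually supplied through SQL::Function#over; a hedged sketch, assuming the adapter supports window functions:
DB[:items].select{[id, sum(:value).over(:partition=>:group_id, :order=>:id, :frame=>:rows).as(:running_total)]}
# SELECT id, sum(value) OVER (PARTITION BY group_id ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total FROM items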
# File lib/sequel/dataset/sql.rb, line 907 def window_sql_append(sql, opts) raise(Error, 'This dataset does not support window functions') unless supports_window_functions? sql << '(' window, part, order, frame = opts.values_at(:window, :partition, :order, :frame) space = false space_s = ' ' if window literal_append(sql, window) space = true end if part sql << space_s if space sql << "PARTITION BY " expression_list_append(sql, Array(part)) space = true end if order sql << space_s if space sql << "ORDER BY " expression_list_append(sql, Array(order)) space = true end case frame when nil # nothing when :all sql << space_s if space sql << "ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING" when :rows sql << space_s if space sql << "ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW" when String sql << space_s if space sql << frame else raise Error, "invalid window frame clause, should be :all, :rows, a string, or nil" end sql << ')' end
Return a #from_self dataset if an order or limit is specified, so it works as expected with UNION, EXCEPT, and INTERSECT clauses.
# File lib/sequel/dataset/sql.rb, line 951 def compound_from_self (@opts[:sql] || @opts[:limit] || @opts[:order] || @opts[:offset]) ? from_self : self end