add_named_conversion_proc(name, &block)
Add a conversion proc for a named type. This should be used for types
without fixed OIDs, which includes all types that are not included in a
default PostgreSQL installation. If a block is given, it is used as the
conversion proc, otherwise the conversion proc is looked up in the
PG_NAMED_TYPES hash.
def add_named_conversion_proc(name, &block)
unless block
if block = PG_NAMED_TYPES[name]
Sequel::Deprecation.deprecate("Sequel::PG_NAMED_TYPES", "Call Database#add_named_conversion_proc directly for each database you want to support the #{name} type")
end
end
add_named_conversion_procs(conversion_procs, name=>block)
end
commit_prepared_transaction(transaction_id, opts=OPTS)
Commit an existing prepared transaction with the given transaction
identifier string.
def commit_prepared_transaction(transaction_id, opts=OPTS)
run("COMMIT PREPARED #{literal(transaction_id)}", opts)
end
create_function(name, definition, opts=OPTS)
Creates the function in the database. Arguments:
- name : name of the function to create
- definition : string definition of the function, or object file for a
  dynamically loaded C function
- opts : options hash:
  - :args : function arguments, can be either a symbol or string specifying
    a type or an array of 1-3 elements:
    - 1 : argument data type
    - 2 : argument name
    - 3 : argument mode (e.g. in, out, inout)
  - :behavior : Should be IMMUTABLE, STABLE, or VOLATILE. PostgreSQL
    assumes VOLATILE by default.
  - :cost : The estimated cost of the function, used by the query planner.
  - :language : The language the function uses. SQL is the default.
  - :link_symbol : For a dynamically loaded C function, the function’s
    link symbol if different from the definition argument.
  - :returns : The data type returned by the function. If you are using
    OUT or INOUT argument modes, this is ignored. Otherwise, if this is
    not specified, void is used by default to specify the function is not
    supposed to return a value.
  - :rows : The estimated number of rows the function will return. Only
    use if the function returns SETOF something.
  - :security_definer : Makes the privileges of the function the same as
    the privileges of the user who defined the function instead of the
    privileges of the user who runs the function. There are security
    implications when doing this, see the PostgreSQL documentation.
  - :set : Configuration variables to set while the function is being run,
    can be a hash or an array of two pairs. search_path is often used here
    if :security_definer is used.
  - :strict : Makes the function return NULL when any argument is NULL.
def create_function(name, definition, opts=OPTS)
self << create_function_sql(name, definition, opts)
end
create_language(name, opts=OPTS)
Create the procedural language in the database. Arguments:
- name : Name of the procedural language (e.g. plpgsql)
- opts : options hash:
  - :handler : The name of a previously registered function used as a call
    handler for this language.
  - :replace : Replace the installed language if it already exists (on
    PostgreSQL 9.0+).
  - :trusted : Marks the language being created as trusted, allowing
    unprivileged users to create functions using this language.
  - :validator : The name of a previously registered function used as a
    validator of functions defined in this language.
def create_language(name, opts=OPTS)
self << create_language_sql(name, opts)
end
create_schema(name, opts=OPTS)
Create a schema in the database. Arguments:
- name : Name of the schema (e.g. admin)
- opts : options hash:
  - :if_not_exists : Don’t raise an error if the schema already exists
    (PostgreSQL 9.3+)
  - :owner : The owner to set for the schema (defaults to current user if
    not specified)
def create_schema(name, opts=OPTS)
self << create_schema_sql(name, opts)
end
create_trigger(table, name, function, opts=OPTS)
Create a trigger in the database. Arguments:
- table : the table on which this trigger operates
- name : the name of this trigger
- function : the function to call for this trigger, which should return
  type trigger.
- opts : options hash:
  - :after : Calls the trigger after execution instead of before.
  - :args : An argument or array of arguments to pass to the function.
  - :each_row : Calls the trigger for each row instead of for each
    statement.
  - :events : Can be :insert, :update, :delete, or an array of any of
    those. Calls the trigger whenever that type of statement is used. By
    default, the trigger is called for insert, update, or delete.
  - :when : A filter to use for the trigger
def create_trigger(table, name, function, opts=OPTS)
self << create_trigger_sql(table, name, function, opts)
end
database_type()
PostgreSQL uses the :postgres database type.
def database_type
:postgres
end
do(code, opts=OPTS)
Use PostgreSQL’s DO syntax to execute an anonymous code block. The code
should be the literal code string to use in the underlying procedural
language. Options:
- :language : The procedural language the code is written in. The
  PostgreSQL default is plpgsql. Can be specified as a string or a symbol.
def do(code, opts=OPTS)
language = opts[:language]
run "DO #{"LANGUAGE #{literal(language.to_s)} " if language}#{literal(code)}"
end
drop_function(name, opts=OPTS)
Drops the function from the database. Arguments:
- name : name of the function to drop
- opts : options hash:
  - :args : The arguments for the function. See create_function_sql.
  - :cascade : Drop other objects depending on this function.
  - :if_exists : Don’t raise an error if the function doesn’t exist.
def drop_function(name, opts=OPTS)
self << drop_function_sql(name, opts)
end
drop_language(name, opts=OPTS)
Drops a procedural language from the database. Arguments:
- name : name of the procedural language to drop
- opts : options hash:
  - :cascade : Drop other objects depending on this language.
  - :if_exists : Don’t raise an error if the language doesn’t exist.
def drop_language(name, opts=OPTS)
self << drop_language_sql(name, opts)
end
drop_schema(name, opts=OPTS)
Drops a schema from the database. Arguments:
- name : name of the schema to drop
- opts : options hash:
  - :cascade : Drop all objects in this schema.
  - :if_exists : Don’t raise an error if the schema doesn’t exist.
def drop_schema(name, opts=OPTS)
self << drop_schema_sql(name, opts)
end
drop_trigger(table, name, opts=OPTS)
Drops a trigger from the database. Arguments:
- table : table from which to drop the trigger
- name : name of the trigger to drop
- opts : options hash:
  - :cascade : Drop other objects depending on this trigger.
  - :if_exists : Don’t raise an error if the trigger doesn’t exist.
def drop_trigger(table, name, opts=OPTS)
self << drop_trigger_sql(table, name, opts)
end
foreign_key_list(table, opts=OPTS)
Return full foreign key information using the pg system tables, including
:name, :on_delete, :on_update, and :deferrable entries in the hashes.
def foreign_key_list(table, opts=OPTS)
m = output_identifier_meth
schema, _ = opts.fetch(:schema, schema_and_table(table))
range = 0...32
oid = regclass_oid(table)
base_ds = metadata_dataset.
from{pg_constraint.as(:co)}.
join(Sequel[:pg_class].as(:cl), :oid=>:conrelid).
where{{
cl[:relkind]=>'r',
co[:contype]=>'f',
cl[:oid]=>oid}}
ds = base_ds.
join(Sequel[:pg_attribute].as(:att), :attrelid=>:oid, :attnum=>SQL::Function.new(:ANY, Sequel[:co][:conkey])).
order{[
co[:conname],
SQL::CaseExpression.new(range.map{|x| [SQL::Subscript.new(co[:conkey], [x]), x]}, 32, att[:attnum])]}.
select{[
co[:conname].as(:name),
att[:attname].as(:column),
co[:confupdtype].as(:on_update),
co[:confdeltype].as(:on_delete),
SQL::BooleanExpression.new(:AND, co[:condeferrable], co[:condeferred]).as(:deferrable)]}
ref_ds = base_ds.
join(Sequel[:pg_class].as(:cl2), :oid=>Sequel[:co][:confrelid]).
join(Sequel[:pg_attribute].as(:att2), :attrelid=>:oid, :attnum=>SQL::Function.new(:ANY, Sequel[:co][:confkey])).
order{[
co[:conname],
SQL::CaseExpression.new(range.map{|x| [SQL::Subscript.new(co[:confkey], [x]), x]}, 32, att2[:attnum])]}.
select{[
co[:conname].as(:name),
cl2[:relname].as(:table),
att2[:attname].as(:refcolumn)]}
if schema
ref_ds = ref_ds.join(Sequel[:pg_namespace].as(:nsp2), :oid=>Sequel[:cl2][:relnamespace]).
select_append{nsp2[:nspname].as(:schema)}
end
h = {}
fklod_map = FOREIGN_KEY_LIST_ON_DELETE_MAP
ds.each do |row|
if r = h[row[:name]]
r[:columns] << m.call(row[:column])
else
h[row[:name]] = {:name=>m.call(row[:name]), :columns=>[m.call(row[:column])], :on_update=>fklod_map[row[:on_update]], :on_delete=>fklod_map[row[:on_delete]], :deferrable=>row[:deferrable]}
end
end
ref_ds.each do |row|
r = h[row[:name]]
r[:table] ||= schema ? SQL::QualifiedIdentifier.new(m.call(row[:schema]), m.call(row[:table])) : m.call(row[:table])
r[:key] ||= []
r[:key] << m.call(row[:refcolumn])
end
h.values
end
freeze()
def freeze
server_version
supports_prepared_transactions?
@conversion_procs.freeze
super
end
indexes(table, opts=OPTS)
Use the pg_* system tables to determine indexes on a table
def indexes(table, opts=OPTS)
m = output_identifier_meth
range = 0...32
attnums = server_version >= 80100 ? SQL::Function.new(:ANY, Sequel[:ind][:indkey]) : range.map{|x| SQL::Subscript.new(Sequel[:ind][:indkey], [x])}
oid = regclass_oid(table, opts)
ds = metadata_dataset.
from{pg_class.as(:tab)}.
join(Sequel[:pg_index].as(:ind), :indrelid=>:oid).
join(Sequel[:pg_class].as(:indc), :oid=>:indexrelid).
join(Sequel[:pg_attribute].as(:att), :attrelid=>Sequel[:tab][:oid], :attnum=>attnums).
left_join(Sequel[:pg_constraint].as(:con), :conname=>Sequel[:indc][:relname]).
where{{
indc[:relkind]=>'i',
ind[:indisprimary]=>false,
:indexprs=>nil,
:indpred=>nil,
:indisvalid=>true,
tab[:oid]=>oid}}.
order{[indc[:relname], SQL::CaseExpression.new(range.map{|x| [SQL::Subscript.new(ind[:indkey], [x]), x]}, 32, att[:attnum])]}.
select{[indc[:relname].as(:name), ind[:indisunique].as(:unique), att[:attname].as(:column), con[:condeferrable].as(:deferrable)]}
ds = ds.where(:indisready=>true, :indcheckxmin=>false) if server_version >= 80300
indexes = {}
ds.each do |r|
i = indexes[m.call(r[:name])] ||= {:columns=>[], :unique=>r[:unique], :deferrable=>r[:deferrable]}
i[:columns] << m.call(r[:column])
end
indexes
end
locks()
Dataset containing all current database locks
def locks
dataset.from(:pg_class).join(:pg_locks, :relation=>:relfilenode).select{[pg_class[:relname], Sequel::SQL::ColumnAll.new(:pg_locks)]}
end
notify(channel, opts=OPTS)
Notifies the given channel. See the PostgreSQL NOTIFY documentation.
Options:
- :payload : The payload string to use for the NOTIFY statement. Only
  supported in PostgreSQL 9.0+.
- :server : The server to which to send the NOTIFY statement, if the
  sharding support is being used.
def notify(channel, opts=OPTS)
sql = String.new
sql << "NOTIFY "
dataset.send(:identifier_append, sql, channel)
if payload = opts[:payload]
sql << ", "
dataset.literal_append(sql, payload.to_s)
end
execute_ddl(sql, opts)
end
primary_key(table, opts=OPTS)
Return primary key for the given table.
def primary_key(table, opts=OPTS)
quoted_table = quote_schema_table(table)
Sequel.synchronize{return @primary_keys[quoted_table] if @primary_keys.has_key?(quoted_table)}
sql = "#{SELECT_PK_SQL} AND pg_class.oid = #{literal(regclass_oid(table, opts))}"
value = fetch(sql).single_value
Sequel.synchronize{@primary_keys[quoted_table] = value}
end
primary_key_sequence(table, opts=OPTS)
Return the sequence providing the default for the primary key for the given
table.
def primary_key_sequence(table, opts=OPTS)
quoted_table = quote_schema_table(table)
Sequel.synchronize{return @primary_key_sequences[quoted_table] if @primary_key_sequences.has_key?(quoted_table)}
sql = "#{SELECT_SERIAL_SEQUENCE_SQL} AND t.oid = #{literal(regclass_oid(table, opts))}"
if pks = fetch(sql).single_record
value = literal(SQL::QualifiedIdentifier.new(pks[:schema], pks[:sequence]))
Sequel.synchronize{@primary_key_sequences[quoted_table] = value}
else
sql = "#{SELECT_CUSTOM_SEQUENCE_SQL} AND t.oid = #{literal(regclass_oid(table, opts))}"
if pks = fetch(sql).single_record
value = literal(SQL::QualifiedIdentifier.new(pks[:schema], LiteralString.new(pks[:sequence])))
Sequel.synchronize{@primary_key_sequences[quoted_table] = value}
end
end
end
refresh_view(name, opts=OPTS)
Refresh the materialized view with the given name.
DB.refresh_view(:items_view)
DB.refresh_view(:items_view, :concurrently=>true)
def refresh_view(name, opts=OPTS)
run "REFRESH MATERIALIZED VIEW#{' CONCURRENTLY' if opts[:concurrently]} #{quote_schema_table(name)}"
end
reset_conversion_procs()
Reset the database’s conversion procs, requires a server query if there
are any named types.
def reset_conversion_procs
@conversion_procs = get_conversion_procs
conversion_procs_updated
@conversion_procs
end
reset_primary_key_sequence(table)
Reset the primary key sequence for the given table, basing it on the
maximum current value of the table’s primary key.
def reset_primary_key_sequence(table)
return unless seq = primary_key_sequence(table)
pk = SQL::Identifier.new(primary_key(table))
db = self
seq_ds = db.from(LiteralString.new(seq))
s, t = schema_and_table(table)
table = Sequel.qualify(s, t) if s
get{setval(seq, db[table].select{coalesce(max(pk)+seq_ds.select{:increment_by}, seq_ds.select(:min_value))}, false)}
end
rollback_prepared_transaction(transaction_id, opts=OPTS)
Rollback an existing prepared transaction with the given transaction
identifier string.
def rollback_prepared_transaction(transaction_id, opts=OPTS)
run("ROLLBACK PREPARED #{literal(transaction_id)}", opts)
end
serial_primary_key_options()
PostgreSQL uses the SERIAL pseudo-type instead of AUTOINCREMENT for
managing incrementing primary keys.
def serial_primary_key_options
{:primary_key => true, :serial => true, :type=>Integer}
end
server_version(server=nil)
The version of the PostgreSQL server, used for determining capability.
def server_version(server=nil)
return @server_version if @server_version
@server_version = synchronize(server) do |conn|
(conn.server_version rescue nil) if conn.respond_to?(:server_version)
end
unless @server_version
@server_version = if m = /PostgreSQL (\d+)\.(\d+)(?:(?:rc\d+)|\.(\d+))?/.match(fetch('SELECT version()').single_value)
(m[1].to_i * 10000) + (m[2].to_i * 100) + m[3].to_i
else
0
end
end
Sequel::Deprecation.deprecate('Sequel no longer supports PostgreSQL <8.2, some things may not work.') if @server_version < 80200
@server_version
end
supports_create_table_if_not_exists?()
PostgreSQL supports CREATE TABLE IF NOT EXISTS on 9.1+
def supports_create_table_if_not_exists?
server_version >= 90100
end
supports_deferrable_constraints?()
PostgreSQL 9.0+ supports some types of deferrable constraints beyond
foreign key constraints.
def supports_deferrable_constraints?
server_version >= 90000
end
supports_deferrable_foreign_key_constraints?()
PostgreSQL supports deferrable foreign key constraints.
def supports_deferrable_foreign_key_constraints?
true
end
supports_drop_table_if_exists?()
PostgreSQL supports DROP TABLE IF EXISTS
def supports_drop_table_if_exists?
true
end
supports_partial_indexes?()
PostgreSQL supports partial indexes.
def supports_partial_indexes?
true
end
supports_prepared_transactions?()
PostgreSQL supports prepared transactions (two-phase commit) if
max_prepared_transactions is greater than 0.
def supports_prepared_transactions?
return @supports_prepared_transactions if defined?(@supports_prepared_transactions)
@supports_prepared_transactions = self['SHOW max_prepared_transactions'].get.to_i > 0
end
supports_savepoints?()
PostgreSQL supports savepoints
def supports_savepoints?
true
end
supports_transaction_isolation_levels?()
PostgreSQL supports transaction isolation levels
def supports_transaction_isolation_levels?
true
end
supports_transactional_ddl?()
PostgreSQL supports transactional DDL statements.
def supports_transactional_ddl?
true
end
supports_trigger_conditions?()
PostgreSQL 9.0+ supports trigger conditions.
def supports_trigger_conditions?
server_version >= 90000
end
tables(opts=OPTS, &block)
Array of symbols specifying table names in the current database. The
dataset used is yielded to the block if one is provided; otherwise, an
array of symbols of table names is returned.
Options:
- :qualify : Return the tables as Sequel::SQL::QualifiedIdentifier
  instances, using the schema the table is located in as the qualifier.
- :schema : The schema to search
- :server : The server to use
def tables(opts=OPTS, &block)
pg_class_relname('r', opts, &block)
end
type_supported?(type)
Check whether the given type name string/symbol (e.g. :hstore) is supported
by the database.
def type_supported?(type)
Sequel.synchronize{return @supported_types[type] if @supported_types.has_key?(type)}
supported = from(:pg_type).where(:typtype=>'b', :typname=>type.to_s).count > 0
Sequel.synchronize{return @supported_types[type] = supported}
end
values(v)
Creates a dataset that uses the VALUES clause:
DB.values([[1, 2], [3, 4]])
# VALUES ((1, 2), (3, 4))
DB.values([[1, 2], [3, 4]]).order(:column2).limit(1, 1)
# VALUES ((1, 2), (3, 4)) ORDER BY column2 LIMIT 1 OFFSET 1
def values(v)
@default_dataset.clone(:values=>v)
end
views(opts=OPTS)
Array of symbols specifying view names in
the current database.
Options:
- :materialized : Return materialized views
- :qualify : Return the views as Sequel::SQL::QualifiedIdentifier
  instances, using the schema the view is located in as the qualifier.
- :schema : The schema to search
- :server : The server to use
def views(opts=OPTS)
relkind = opts[:materialized] ? 'm' : 'v'
pg_class_relname(relkind, opts)
end