User-defined functions can be written in C (or a language that can
be made compatible with C, such as C++). Such functions are
compiled into dynamically loadable objects (also called shared
libraries) and are loaded by the server on demand. The dynamic
loading feature is what distinguishes "C language" functions
from "internal" functions --- the actual coding conventions
are essentially the same for both. (Hence, the standard internal
function library is a rich source of coding examples for user-defined
C functions.)
Two different calling conventions are currently used for C functions.
The newer "version 1" calling convention is indicated by writing
a PG_FUNCTION_INFO_V1() macro call for the function,
as illustrated below. Lack of such a macro indicates an old-style
("version 0") function. The language name specified in CREATE FUNCTION
is C in either case. Old-style functions are now deprecated
because of portability problems and lack of functionality, but they
are still supported for compatibility reasons.
The first time a user-defined function in a particular
loadable object file is called in a backend session,
the dynamic loader loads that object file into memory so that the
function can be called. The CREATE FUNCTION
for a user-defined C function must therefore specify two pieces of
information for the function: the name of the loadable
object file, and the C name (link symbol) of the specific function to call
within that object file. If the C name is not explicitly specified then
it is assumed to be the same as the SQL function name.
The following algorithm is used to locate the shared object file
based on the name given in the CREATE FUNCTION
command:
1. If the name is an absolute path, the given file is loaded.
2. If the name starts with the string $libdir, that part is replaced by
   the PostgreSQL package library directory name, which is determined at
   build time.
3. If the name does not contain a directory part, the file is searched
   for in the path specified by the configuration variable
   dynamic_library_path.
4. Otherwise (the file was not found in the path, or it contains a
   non-absolute directory part), the dynamic loader will try to take the
   name as given, which will most likely fail. (It is unreliable to
   depend on the current working directory.)
If this sequence does not work, the platform-specific shared
library file name extension (often .so) is
appended to the given name and this sequence is tried again. If
that fails as well, the load will fail.
Note: The user ID the PostgreSQL server runs
as must be able to traverse the path to the file you intend to
load. Making the file or a higher-level directory not readable
and/or not executable by the postgres user is a
common mistake.
In any case, the file name that is given in the
CREATE FUNCTION command is recorded literally
in the system catalogs, so if the file needs to be loaded again
the same procedure is applied.
Note: PostgreSQL will not compile a C function
automatically. The object file must be compiled before it is referenced
in a CREATE
FUNCTION command. See Section 9.5.8 for additional
information.
Note: After it is used for the first time, a dynamically loaded object
file is retained in memory. Future calls in the same session to the
function(s) in that file will only incur the small overhead of a symbol
table lookup. If you need to force a reload of an object file, for
example after recompiling it, use the LOAD command or
begin a fresh session.
It is recommended to locate shared libraries either relative to
$libdir or through the dynamic library path.
This simplifies version upgrades if the new installation is at a
different location. The actual directory that
$libdir stands for can be found out with the
command pg_config --pkglibdir.
Note: Before PostgreSQL release 7.2, only exact
absolute paths to object files could be specified in CREATE
FUNCTION. This approach is now deprecated since it makes the
function definition unnecessarily unportable. It's best to specify
just the shared library name with no path nor extension, and let
the search mechanism provide that information instead.
Table 9-1 gives the C type required for
parameters in the C functions that will be loaded into
PostgreSQL.
The "Defined In" column gives the header file that
needs to be included to get the type definition. (The actual
definition may be in a different file that is included by the
listed file. It is recommended that users stick to the defined
interface.) Note that you should always include
postgres.h first in any source file, because
it declares a number of things that you will need anyway.
Table 9-1. Equivalent C Types for Built-In PostgreSQL Types

SQL Type                     C Type           Defined In
abstime                      AbsoluteTime     utils/nabstime.h
boolean                      bool             postgres.h (maybe compiler built-in)
box                          BOX*             utils/geo_decls.h
bytea                        bytea*           postgres.h
"char"                       char             (compiler built-in)
character                    BpChar*          postgres.h
cid                          CommandId        postgres.h
date                         DateADT          utils/date.h
smallint (int2)              int2 or int16    postgres.h
int2vector                   int2vector*      postgres.h
integer (int4)               int4 or int32    postgres.h
real (float4)                float4*          postgres.h
double precision (float8)    float8*          postgres.h
interval                     Interval*        utils/timestamp.h
lseg                         LSEG*            utils/geo_decls.h
name                         Name             postgres.h
oid                          Oid              postgres.h
oidvector                    oidvector*       postgres.h
path                         PATH*            utils/geo_decls.h
point                        POINT*           utils/geo_decls.h
regproc                      regproc          postgres.h
reltime                      RelativeTime     utils/nabstime.h
text                         text*            postgres.h
tid                          ItemPointer      storage/itemptr.h
time                         TimeADT          utils/date.h
time with time zone          TimeTzADT        utils/date.h
timestamp                    Timestamp*       utils/timestamp.h
tinterval                    TimeInterval     utils/nabstime.h
varchar                      VarChar*         postgres.h
xid                          TransactionId    postgres.h
Internally, PostgreSQL regards a
base type as a "blob of memory". The user-defined
functions that you define over a type in turn define the
way that PostgreSQL can operate
on it. That is, PostgreSQL will
only store and retrieve the data from disk and use your
user-defined functions to input, process, and output the data.
Base types can have one of three internal formats:

- pass by value, fixed-length
- pass by reference, fixed-length
- pass by reference, variable-length
By-value types can only be 1, 2 or 4 bytes in length
(also 8 bytes, if sizeof(Datum) is 8 on your machine).
You should be careful
to define your types such that they will be the same
size (in bytes) on all architectures. For example, the
long type is dangerous because it
is 4 bytes on some machines and 8 bytes on others, whereas the
int type is 4 bytes on most
Unix machines. A reasonable implementation of
the int4 type on Unix
machines might be:
/* 4-byte integer, passed by value */
typedef int int4;
PostgreSQL automatically figures
things out so that the integer types really have the size they
advertise.
On the other hand, fixed-length types of any size may
be passed by-reference. For example, here is a sample
implementation of a PostgreSQL type:
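/* 16-byte structure, passed by reference */
typedef struct
{
    double  x, y;
} Point;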
Only pointers to such types can be used when passing
them in and out of PostgreSQL functions.
To return a value of such a type, allocate the right amount of
memory with palloc(), fill in the allocated memory,
and return a pointer to it. (Alternatively, you can return an input
value of the same type by returning its pointer. Never
modify the contents of a pass-by-reference input value, however.)
Finally, all variable-length types must also be passed
by reference. All variable-length types must begin
with a length field of exactly 4 bytes, and all data to
be stored within that type must be located in the memory
immediately following that length field. The
length field is the total length of the structure
(i.e., it includes the size of the length field
itself). We can define the text type as follows:
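typedef struct {
    int4 length;
    char data[1];
} text;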
Obviously, the data field declared here is not long enough to hold
all possible strings. Since it's impossible to declare a variable-size
structure in C, we rely on the knowledge that the
C compiler won't range-check array subscripts. We
just allocate the necessary amount of space and then access the array as
if it were declared the right length. (If this isn't a familiar trick to
you, you may wish to spend some time with an introductory
C programming textbook before delving deeper into
PostgreSQL server programming.)
When manipulating
variable-length types, we must be careful to allocate
the correct amount of memory and set the length field correctly.
For example, if we wanted to store 40 bytes in a text
structure, we might use a code fragment like this:
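#include "postgres.h"
...
char buf[40];           /* our source data */
...
text *destination = (text *) palloc(VARHDRSZ + 40);
destination->length = VARHDRSZ + 40;
memcpy(destination->data, buf, 40);
...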
VARHDRSZ is the same as sizeof(int4), but
it's considered good style to use the macro VARHDRSZ
to refer to the size of the overhead for a variable-length type.
Now that we've gone over all of the possible structures
for base types, we can show some examples of real functions.
We present the "old style" calling convention first --- although
this approach is now deprecated, it's easier to get a handle on
initially. In the version-0 method, the arguments and result
of the C function are just declared in normal C style, taking care
to use the C representation of each SQL data type as shown above.
Here are some examples:
#include "postgres.h"
#include <string.h>
/* By Value */
int
add_one(int arg)
{
return arg + 1;
}
/* By Reference, Fixed Length */
float8 *
add_one_float8(float8 *arg)
{
float8 *result = (float8 *) palloc(sizeof(float8));
*result = *arg + 1.0;
return result;
}
Point *
makepoint(Point *pointx, Point *pointy)
{
Point *new_point = (Point *) palloc(sizeof(Point));
new_point->x = pointx->x;
new_point->y = pointy->y;
return new_point;
}
/* By Reference, Variable Length */
text *
copytext(text *t)
{
/*
* VARSIZE is the total size of the struct in bytes.
*/
text *new_t = (text *) palloc(VARSIZE(t));
VARATT_SIZEP(new_t) = VARSIZE(t);
/*
* VARDATA is a pointer to the data region of the struct.
*/
memcpy((void *) VARDATA(new_t), /* destination */
(void *) VARDATA(t), /* source */
VARSIZE(t)-VARHDRSZ); /* how many bytes */
return new_t;
}
text *
concat_text(text *arg1, text *arg2)
{
int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
text *new_text = (text *) palloc(new_text_size);
VARATT_SIZEP(new_text) = new_text_size;
memcpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1)-VARHDRSZ);
memcpy(VARDATA(new_text) + (VARSIZE(arg1)-VARHDRSZ),
VARDATA(arg2), VARSIZE(arg2)-VARHDRSZ);
return new_text;
}
Supposing that the above code has been prepared in file
funcs.c and compiled into a shared object,
we could define the functions to PostgreSQL
with commands like this:
CREATE FUNCTION add_one(int4) RETURNS int4
AS 'PGROOT/tutorial/funcs' LANGUAGE C
WITH (isStrict);
-- note overloading of SQL function name add_one()
CREATE FUNCTION add_one(float8) RETURNS float8
AS 'PGROOT/tutorial/funcs',
'add_one_float8'
LANGUAGE C WITH (isStrict);
CREATE FUNCTION makepoint(point, point) RETURNS point
AS 'PGROOT/tutorial/funcs' LANGUAGE C
WITH (isStrict);
CREATE FUNCTION copytext(text) RETURNS text
AS 'PGROOT/tutorial/funcs' LANGUAGE C
WITH (isStrict);
CREATE FUNCTION concat_text(text, text) RETURNS text
AS 'PGROOT/tutorial/funcs' LANGUAGE C
WITH (isStrict);
Here PGROOT stands for the full path to
the PostgreSQL source tree. (Better style would
be to use just 'funcs' in the AS clause,
after having added PGROOT/tutorial
to the search path. In any case, we may omit the system-specific
extension for a shared library, commonly .so or
.sl.)
Notice that we have specified the functions as "strict",
meaning that
the system should automatically assume a NULL result if any input
value is NULL. By doing this, we avoid having to check for NULL inputs
in the function code. Without this, we'd have to check for null values
explicitly, for example by checking for a null pointer for each
pass-by-reference argument. (For pass-by-value arguments, we don't
even have a way to check!)
Although this calling convention is simple to use,
it is not very portable; on some architectures there are problems
with passing smaller-than-int data types this way. Also, there is
no simple way to return a NULL result, nor to cope with NULL arguments
in any way other than making the function strict. The version-1
convention, presented next, overcomes these objections.
The version-1 calling convention relies on macros to suppress most
of the complexity of passing arguments and results. The C declaration
of a version-1 function is always
Datum funcname(PG_FUNCTION_ARGS)
In addition, the macro call
PG_FUNCTION_INFO_V1(funcname);
must appear in the same source file (conventionally it's written
just before the function itself). This macro call is not needed
for internal-language functions, since
PostgreSQL currently
assumes all internal functions are version-1. However, it is
required for dynamically-loaded functions.
In a version-1 function, each actual argument is fetched using a
PG_GETARG_xxx()
macro that corresponds to the argument's data type, and the result
is returned using a
PG_RETURN_xxx()
macro for the return type.
Here we show the same functions as above, coded in version-1 style:
#include "postgres.h"
#include <string.h>
#include "fmgr.h"
/* By Value */
PG_FUNCTION_INFO_V1(add_one);
Datum
add_one(PG_FUNCTION_ARGS)
{
int32 arg = PG_GETARG_INT32(0);
PG_RETURN_INT32(arg + 1);
}
/* By Reference, Fixed Length */
PG_FUNCTION_INFO_V1(add_one_float8);
Datum
add_one_float8(PG_FUNCTION_ARGS)
{
/* The macros for FLOAT8 hide its pass-by-reference nature */
float8 arg = PG_GETARG_FLOAT8(0);
PG_RETURN_FLOAT8(arg + 1.0);
}
PG_FUNCTION_INFO_V1(makepoint);
Datum
makepoint(PG_FUNCTION_ARGS)
{
/* Here, the pass-by-reference nature of Point is not hidden */
Point *pointx = PG_GETARG_POINT_P(0);
Point *pointy = PG_GETARG_POINT_P(1);
Point *new_point = (Point *) palloc(sizeof(Point));
new_point->x = pointx->x;
new_point->y = pointy->y;
PG_RETURN_POINT_P(new_point);
}
/* By Reference, Variable Length */
PG_FUNCTION_INFO_V1(copytext);
Datum
copytext(PG_FUNCTION_ARGS)
{
text *t = PG_GETARG_TEXT_P(0);
/*
* VARSIZE is the total size of the struct in bytes.
*/
text *new_t = (text *) palloc(VARSIZE(t));
VARATT_SIZEP(new_t) = VARSIZE(t);
/*
* VARDATA is a pointer to the data region of the struct.
*/
memcpy((void *) VARDATA(new_t), /* destination */
(void *) VARDATA(t), /* source */
VARSIZE(t)-VARHDRSZ); /* how many bytes */
PG_RETURN_TEXT_P(new_t);
}
PG_FUNCTION_INFO_V1(concat_text);
Datum
concat_text(PG_FUNCTION_ARGS)
{
text *arg1 = PG_GETARG_TEXT_P(0);
text *arg2 = PG_GETARG_TEXT_P(1);
int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
text *new_text = (text *) palloc(new_text_size);
VARATT_SIZEP(new_text) = new_text_size;
memcpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1)-VARHDRSZ);
memcpy(VARDATA(new_text) + (VARSIZE(arg1)-VARHDRSZ),
VARDATA(arg2), VARSIZE(arg2)-VARHDRSZ);
PG_RETURN_TEXT_P(new_text);
}
The CREATE FUNCTION commands are the same as
for the version-0 equivalents.
At first glance, the version-1 coding conventions may appear to
be just pointless obscurantism. However, they do offer a number
of improvements, because the macros can hide unnecessary detail.
An example is that in coding add_one_float8, we no longer need to
be aware that float8 is a pass-by-reference type. Another
example is that the GETARG macros for variable-length types hide
the need to deal with fetching "toasted" (compressed or
out-of-line) values. The old-style copytext
and concat_text functions shown above are
actually wrong in the presence of toasted values, because they
don't call pg_detoast_datum() on their
inputs. (The handler for old-style dynamically-loaded functions
currently takes care of this detail, but it does so less
efficiently than is possible for a version-1 function.)
One big improvement in version-1 functions is better handling of NULL
inputs and results. The macro PG_ARGISNULL(n)
allows a function to test whether each input is NULL (of course, doing
this is only necessary in functions not declared "strict").
As with the
PG_GETARG_xxx() macros,
the input arguments are counted beginning at zero. Note that one
should refrain from executing
PG_GETARG_xxx() until
one has verified that the argument isn't NULL.
To return a NULL result, execute PG_RETURN_NULL();
this works in both strict and nonstrict functions.
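For example, a non-strict function that passes NULL through might be
written as follows (a minimal sketch; the function name
add_one_or_null is made up for illustration):
#include "postgres.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(add_one_or_null);

Datum
add_one_or_null(PG_FUNCTION_ARGS)
{
    /* Test the argument before fetching it; this is only necessary
     * because the function is assumed not to be declared strict. */
    if (PG_ARGISNULL(0))
        PG_RETURN_NULL();

    PG_RETURN_INT32(PG_GETARG_INT32(0) + 1);
}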
Other options provided in the new-style interface are two
variants of the
PG_GETARG_xxx()
macros. The first of these,
PG_GETARG_xxx_COPY(),
guarantees to return a copy of the specified parameter that is
safe for writing into. (The normal macros will sometimes return a
pointer to a value that is physically stored in a table, and so
must not be written to. Using the
PG_GETARG_xxx_COPY()
macros guarantees a writable result.)
The second variant consists of the
PG_GETARG_xxx_SLICE()
macros which take three parameters. The first is the number of the
parameter (as above). The second and third are the offset and
length of the segment to be returned. Offsets are counted from
zero, and a negative length requests that the remainder of the
value be returned. These routines provide more efficient access to
parts of large values in the case where they have storage type
"external". (The storage type of a column can be specified using
ALTER TABLE tablename ALTER
COLUMN colname SET STORAGE
storagetype. Storage type is one of
plain, external, extended,
or main.)
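As an illustration, the slice variant can be used to fetch just the
beginning of a potentially large value (a minimal sketch; the function
name text_head is made up, and the first argument is assumed to be of
type text):
#include "postgres.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(text_head);

Datum
text_head(PG_FUNCTION_ARGS)
{
    /* Fetch only the slice at offset 0 with length 100; for a value
     * of storage type "external" the rest need not be read at all. */
    text *head = PG_GETARG_TEXT_P_SLICE(0, 0, 100);

    PG_RETURN_TEXT_P(head);
}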
The version-1 function call conventions make it possible to
return "set" results and implement trigger functions and
procedural-language call handlers. Version-1 code is also more
portable than version-0, because it does not break ANSI C restrictions
on function call protocol. For more details see
src/backend/utils/fmgr/README in the source
distribution.
Composite types do not have a fixed layout like C
structures. Instances of a composite type may contain
null fields. In addition, composite types that are
part of an inheritance hierarchy may have different
fields than other members of the same inheritance hierarchy.
Therefore, PostgreSQL provides
a procedural interface for accessing fields of composite types
from C. As PostgreSQL processes
a set of rows, each row will be passed into your
function as an opaque structure of type TUPLE.
Suppose we want to write a function to answer the query
SELECT name, c_overpaid(emp, 1500) AS overpaid
FROM emp
WHERE name = 'Bill' OR name = 'Sam';
In the query above, we can define c_overpaid as:
#include "postgres.h"
#include "executor/executor.h" /* for GetAttributeByName() */
bool
c_overpaid(TupleTableSlot *t, /* the current row of EMP */
int32 limit)
{
bool isnull;
int32 salary;
salary = DatumGetInt32(GetAttributeByName(t, "salary", &isnull));
if (isnull)
return (false);
return salary > limit;
}
/* In version-1 coding, the above would look like this: */
PG_FUNCTION_INFO_V1(c_overpaid);
Datum
c_overpaid(PG_FUNCTION_ARGS)
{
TupleTableSlot *t = (TupleTableSlot *) PG_GETARG_POINTER(0);
int32 limit = PG_GETARG_INT32(1);
bool isnull;
int32 salary;
salary = DatumGetInt32(GetAttributeByName(t, "salary", &isnull));
if (isnull)
PG_RETURN_BOOL(false);
/* Alternatively, we might prefer to do PG_RETURN_NULL() for null salary */
PG_RETURN_BOOL(salary > limit);
}
GetAttributeByName is the
PostgreSQL system function that
returns attributes out of the current row. It has
three arguments: the argument of type TupleTableSlot* passed into
the function, the name of the desired attribute, and a
return parameter that tells whether the attribute
is null. GetAttributeByName returns a Datum
value that you can convert to the proper data type by using the
appropriate DatumGetXXX() macro.
The following command lets PostgreSQL
know about the c_overpaid function:
CREATE FUNCTION c_overpaid(emp, int4)
RETURNS bool
AS 'PGROOT/tutorial/funcs'
LANGUAGE C;
The Table Function API assists in the creation of user-defined
C language table functions (Section 9.7).
Table functions are functions that produce a set of rows, made up of
either base (scalar) data types, or composite (multi-column) data types.
The API is split into two main components: support for returning
composite data types, and support for returning multiple rows
(set returning functions or SRFs).
The Table Function API relies on macros and functions to suppress most
of the complexity of building composite data types and returning multiple
results. A table function must follow the version-1 calling convention
described above. In addition, the source file must include:
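#include "funcapi.h"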
The Table Function API support for returning composite data types
(or rows) starts with the AttInMetadata
structure. This structure holds arrays of individual attribute
information needed to create a row from raw C strings. It also
saves a pointer to the TupleDesc. The information
carried here is derived from the TupleDesc, but it
is stored here to avoid redundant CPU cycles on each call to a
table function. In the case of a function returning a set, the
AttInMetadata structure should be computed
once during the first call and saved for re-use in later calls.
typedef struct AttInMetadata
{
/* full TupleDesc */
TupleDesc tupdesc;
/* array of attribute type input function finfo */
FmgrInfo *attinfuncs;
/* array of attribute type typelem */
Oid *attelems;
/* array of attribute typmod */
int32 *atttypmods;
} AttInMetadata;
To assist you in populating this structure, several functions and a macro
are available. In particular, the function
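AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc)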
will return a pointer to an AttInMetadata,
initialized based on the given
TupleDesc. AttInMetadata can be
used in conjunction with C strings to produce a properly formed
tuple. The metadata is stored here to avoid redundant work across
multiple calls.
To return a tuple you must create a tuple slot based on the
TupleDesc. You can use
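TupleTableSlot *TupleDescGetSlot(TupleDesc tupdesc)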
to initialize this tuple slot, or obtain one through other (user provided)
means. The tuple slot is needed to create a Datum for return by the
function. The same slot can (and should) be re-used on each call.
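HeapTuple BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values)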
can be used to build a HeapTuple given user data
in C string form. "values" is an array of C strings, one for
each attribute of the return tuple. Each C string should be in
the form expected by the input function of the attribute data
type. In order to return a null value for one of the attributes,
the corresponding pointer in the values array
should be set to NULL. This function will need to
be called again for each tuple you return.
Building a tuple via TupleDescGetAttInMetadata and
BuildTupleFromCStrings is only convenient if your
function naturally computes the values to be returned as text
strings. If your code naturally computes the values as a set of
Datums, you should instead use the underlying
heap_formtuple routine to convert the
Datums directly into a tuple. You will still need
the TupleDesc and a TupleTableSlot,
but not AttInMetadata.
Once you have built a tuple to return from your function, the tuple must
be converted into a Datum. Use
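TupleGetDatum(TupleTableSlot *slot, HeapTuple tuple)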
to get a Datum given a tuple and a slot. This
Datum can be returned directly if you intend to return
just a single row, or it can be used as the current return value
in a set-returning function.
A set-returning function (SRF) is normally called
once for each item it returns. The SRF must
therefore save enough state to remember what it was doing and
return the next item on each call. The Table Function API
provides the FuncCallContext structure to help
control this process. fcinfo->flinfo->fn_extra
is used to hold a pointer to FuncCallContext
across calls.
typedef struct
{
/*
* Number of times we've been called before.
*
* call_cntr is initialized to 0 for you by SRF_FIRSTCALL_INIT(), and
* incremented for you every time SRF_RETURN_NEXT() is called.
*/
uint32 call_cntr;
/*
* OPTIONAL maximum number of calls
*
* max_calls is here for convenience ONLY and setting it is OPTIONAL.
* If not set, you must provide alternative means to know when the
* function is done.
*/
uint32 max_calls;
/*
* OPTIONAL pointer to result slot
*
* slot is for use when returning tuples (i.e. composite data types)
* and is not needed when returning base (i.e. scalar) data types.
*/
TupleTableSlot *slot;
/*
* OPTIONAL pointer to misc user provided context info
*
* user_fctx is for use as a pointer to your own struct to retain
* arbitrary context information between calls for your function.
*/
void *user_fctx;
/*
* OPTIONAL pointer to struct containing arrays of attribute type input
* metainfo
*
* attinmeta is for use when returning tuples (i.e. composite data types)
* and is not needed when returning base (i.e. scalar) data types. It
* is ONLY needed if you intend to use BuildTupleFromCStrings() to create
* the return tuple.
*/
AttInMetadata *attinmeta;
/*
* memory context used for structures which must live for multiple calls
*
* multi_call_memory_ctx is set by SRF_FIRSTCALL_INIT() for you, and used
* by SRF_RETURN_DONE() for cleanup. It is the most appropriate memory
* context for any memory that is to be re-used across multiple calls
* of the SRF.
*/
MemoryContext multi_call_memory_ctx;
} FuncCallContext;
An SRF uses several functions and macros that
automatically manipulate the FuncCallContext
structure (and expect to find it via fn_extra). Use
SRF_IS_FIRSTCALL()
to determine if your function is being called for the first or a
subsequent time. On the first call (only) use
SRF_FIRSTCALL_INIT()
to initialize the FuncCallContext. On every function call,
including the first, use
SRF_PERCALL_SETUP()
to set up for using the FuncCallContext and to clear any previously
returned data left over from the previous pass.
If your function has data to return, use
SRF_RETURN_NEXT(funcctx, result)
to return it to the caller. (The result must be a
Datum, either a single value or a tuple prepared as
described earlier.) Finally, when your function is finished
returning data, use
SRF_RETURN_DONE(funcctx)
to clean up and end the SRF.
The memory context that is current when the SRF is called is
a transient context that will be cleared between calls. This means
that you do not need to pfree everything
you palloc; it will go away anyway. However, if you want to allocate
any data structures to live across calls, you need to put them somewhere
else. The memory context referenced by
multi_call_memory_ctx is a suitable location for any
data that needs to survive until the SRF is finished running. In most
cases, this means that you should switch into
multi_call_memory_ctx while doing the first-call setup.
A complete pseudo-code example looks like the following:
Datum
my_Set_Returning_Function(PG_FUNCTION_ARGS)
{
FuncCallContext *funcctx;
Datum result;
MemoryContext oldcontext;
[user defined declarations]
if (SRF_IS_FIRSTCALL())
{
funcctx = SRF_FIRSTCALL_INIT();
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* one-time setup code appears here: */
[user defined code]
[if returning composite]
[build TupleDesc, and perhaps AttInMetadata]
[obtain slot]
funcctx->slot = slot;
[endif returning composite]
[user defined code]
MemoryContextSwitchTo(oldcontext);
}
/* each-time setup code appears here: */
[user defined code]
funcctx = SRF_PERCALL_SETUP();
[user defined code]
/* this is just one way we might test whether we are done: */
if (funcctx->call_cntr < funcctx->max_calls)
{
/* here we want to return another item: */
[user defined code]
[obtain result Datum]
SRF_RETURN_NEXT(funcctx, result);
}
else
{
/* here we are done returning items, and just need to clean up: */
[user defined code]
SRF_RETURN_DONE(funcctx);
}
}
A complete example of a simple SRF returning a composite type looks like:
PG_FUNCTION_INFO_V1(testpassbyval);
Datum
testpassbyval(PG_FUNCTION_ARGS)
{
FuncCallContext *funcctx;
int call_cntr;
int max_calls;
TupleDesc tupdesc;
TupleTableSlot *slot;
AttInMetadata *attinmeta;
/* stuff done only on the first call of the function */
if (SRF_IS_FIRSTCALL())
{
MemoryContext oldcontext;
/* create a function context for cross-call persistence */
funcctx = SRF_FIRSTCALL_INIT();
/* switch to memory context appropriate for multiple function calls */
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* total number of tuples to be returned */
funcctx->max_calls = PG_GETARG_UINT32(0);
/*
* Build a tuple description for a __testpassbyval tuple
*/
tupdesc = RelationNameGetTupleDesc("__testpassbyval");
/* allocate a slot for a tuple with this tupdesc */
slot = TupleDescGetSlot(tupdesc);
/* assign slot to function context */
funcctx->slot = slot;
/*
* Generate attribute metadata needed later to produce tuples from raw
* C strings
*/
attinmeta = TupleDescGetAttInMetadata(tupdesc);
funcctx->attinmeta = attinmeta;
MemoryContextSwitchTo(oldcontext);
}
/* stuff done on every call of the function */
funcctx = SRF_PERCALL_SETUP();
call_cntr = funcctx->call_cntr;
max_calls = funcctx->max_calls;
slot = funcctx->slot;
attinmeta = funcctx->attinmeta;
if (call_cntr < max_calls) /* do when there is more left to send */
{
char **values;
HeapTuple tuple;
Datum result;
/*
* Prepare a values array for storage in our slot.
* This should be an array of C strings which will
* be processed later by the appropriate "in" functions.
*/
values = (char **) palloc(3 * sizeof(char *));
values[0] = (char *) palloc(16 * sizeof(char));
values[1] = (char *) palloc(16 * sizeof(char));
values[2] = (char *) palloc(16 * sizeof(char));
snprintf(values[0], 16, "%d", 1 * PG_GETARG_INT32(1));
snprintf(values[1], 16, "%d", 2 * PG_GETARG_INT32(1));
snprintf(values[2], 16, "%d", 3 * PG_GETARG_INT32(1));
/* build a tuple */
tuple = BuildTupleFromCStrings(attinmeta, values);
/* make the tuple into a datum */
result = TupleGetDatum(slot, tuple);
/* Clean up (this is not actually necessary) */
pfree(values[0]);
pfree(values[1]);
pfree(values[2]);
pfree(values);
SRF_RETURN_NEXT(funcctx, result);
}
else /* do when there is no more left */
{
SRF_RETURN_DONE(funcctx);
}
}
with supporting SQL code of
CREATE TYPE __testpassbyval AS (f1 int4, f2 int4, f3 int4);
CREATE OR REPLACE FUNCTION testpassbyval(int4, int4) RETURNS setof __testpassbyval
AS 'MODULE_PATHNAME','testpassbyval' LANGUAGE 'c' IMMUTABLE STRICT;
See contrib/tablefunc for more examples of table functions.
We now turn to the more difficult task of writing
programming language functions. Be warned: this section
of the manual will not make you a programmer. You must
have a good understanding of C
(including the use of pointers)
before trying to write C functions for
use with PostgreSQL. While it may
be possible to load functions written in languages other
than C into PostgreSQL,
this is often difficult (when it is possible at all)
because other languages, such as FORTRAN
and Pascal, often do not follow the same
calling convention
as C. That is, other
languages do not pass argument and return values
between functions in the same way. For this reason, we
will assume that your programming language functions
are written in C.
The basic rules for building C functions
are as follows:
Use pg_config --includedir-server to find
out where the PostgreSQL server header files are installed on
your system (or the system that your users will be running
on). This option is new with PostgreSQL 7.2.
For PostgreSQL
7.1 you should use the option --includedir.
(pg_config will exit with a non-zero status
if it encounters an unknown option.) For releases prior to
7.1 you will have to guess, but since that was before the
current calling conventions were introduced, it is unlikely
that you want to support those releases.
When allocating memory, use the
PostgreSQL routines
palloc and pfree
instead of the corresponding C library
routines malloc and
free. The memory allocated by
palloc will be freed automatically at the
end of each transaction, preventing memory leaks.
Always zero the bytes of your structures using
memset or bzero.
Several routines (such as the hash access method, hash join
and the sort algorithm) compute functions of the raw bits
contained in your structure. Even if you initialize all
fields of your structure, there may be several bytes of
alignment padding (holes in the structure) that may contain
garbage values.
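For instance, a fixed-length pass-by-reference type with internal
alignment padding might be filled in as follows (a sketch using a
hypothetical type and helper function):
#include "postgres.h"
#include <string.h>

typedef struct
{
    char    code;     /* typically followed by padding before "value" */
    double  value;
} MyFixedType;

MyFixedType *
make_my_fixed_type(char code, double value)
{
    MyFixedType *result = (MyFixedType *) palloc(sizeof(MyFixedType));

    /* Zero the whole structure first, so padding bytes are well defined. */
    memset(result, 0, sizeof(MyFixedType));
    result->code = code;
    result->value = value;
    return result;
}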
Most of the internal PostgreSQL types
are declared in postgres.h, while the function
manager interfaces (PG_FUNCTION_ARGS, etc.)
are in fmgr.h, so you will need to
include at least these two files. For portability reasons it's best
to include postgres.h first,
before any other system or user header files.
Including postgres.h will also include
elog.h and palloc.h
for you.
Symbol names defined within object files must not conflict
with each other or with symbols defined in the
PostgreSQL server executable. You
will have to rename your functions or variables if you get
error messages to this effect.
Compiling and linking your object code so that
it can be dynamically loaded into
PostgreSQL
always requires special flags.
See Section 9.5.8
for a detailed explanation of how to do it for
your particular operating system.
Before you are able to use your
PostgreSQL extension functions written in
C, they must be compiled and linked in a special way to produce a file
that can be dynamically loaded by the server. To be
precise, a shared library needs to be created.
For more information you should read the documentation of your
operating system, in particular the manual pages for the C compiler,
cc, and the link editor, ld.
In addition, the PostgreSQL source code
contains several working examples in the
contrib directory. If you rely on these
examples you will make your modules dependent on the availability
of the PostgreSQL source code, however.
Creating shared libraries is generally analogous to linking
executables: first the source files are compiled into object files,
then the object files are linked together. The object files need to
be created as position-independent code
(PIC), which conceptually means that they can be
placed at an arbitrary location in memory when they are loaded by the
executable. (Object files intended for executables are usually not compiled
that way.) The command to link a shared library contains special
flags to distinguish it from linking an executable. (At least
this is the theory. On some systems the practice is much uglier.)
In the following examples we assume that your source code is in a
file foo.c and we will create a shared library
foo.so. The intermediate object file will be
called foo.o unless otherwise noted. A shared
library can contain more than one object file, but we only use one
here.
BSD/OS
The compiler flag to create PIC is
-fpic. The linker flag to create shared
libraries is -shared.
gcc -fpic -c foo.c
ld -shared -o foo.so foo.o
This is applicable as of version 4.0 of
BSD/OS.
FreeBSD
The compiler flag to create PIC is
-fpic. To create shared libraries the compiler
flag is -shared.
gcc -fpic -c foo.c
gcc -shared -o foo.so foo.o
This is applicable as of version 3.0 of
FreeBSD.
HP-UX
The compiler flag of the system compiler to create
PIC is +z. When using
GCC it's -fpic. The
linker flag for shared libraries is -b. So
cc +z -c foo.c
or
gcc -fpic -c foo.c
and then
ld -b -o foo.sl foo.o
HP-UX uses the extension
.sl for shared libraries, unlike most other
systems.
IRIX
PIC is the default; no special compiler
options are necessary. The linker option to produce shared
libraries is -shared.
cc -c foo.c
ld -shared -o foo.so foo.o
Linux
The compiler flag to create PIC is
-fpic. On some platforms in some situations
-fPIC must be used if -fpic
does not work. Refer to the GCC manual for more information.
The compiler flag to create a shared library is
-shared. A complete example looks like this:
cc -fpic -c foo.c
cc -shared -o foo.so foo.o
MacOS X
Here is a sample. It assumes the developer tools are installed.
cc -c foo.c
cc -bundle -flat_namespace -undefined suppress -o foo.so foo.o
NetBSD
The compiler flag to create PIC is
-fpic. For ELF systems, the
compiler with the flag -shared is used to link
shared libraries. On the older non-ELF systems, ld
-Bshareable is used.
gcc -fpic -c foo.c
gcc -shared -o foo.so foo.o
OpenBSD
The compiler flag to create PIC is
-fpic. ld -Bshareable is
used to link shared libraries.
gcc -fpic -c foo.c
ld -Bshareable -o foo.so foo.o
Solaris
The compiler flag to create PIC is
-KPIC with the Sun compiler and
-fpic with GCC. To
link shared libraries, the compiler option is
-G with either compiler or alternatively
-shared with GCC.
cc -KPIC -c foo.c
cc -G -o foo.so foo.o
or
gcc -fpic -c foo.c
gcc -G -o foo.so foo.o
Tru64 UNIX
PIC is the default, so the compilation command
is the usual one. ld with special options is
used to do the linking:
cc -c foo.c
ld -shared -expect_unresolved '*' -o foo.so foo.o
The same procedure is used with GCC instead of the system
compiler; no special options are required.
UnixWare
The compiler flag to create PIC is -K
PIC with the SCO compiler and -fpic
with GCC. To link shared libraries,
the compiler option is -G with the SCO compiler
and -shared with
GCC.
cc -K PIC -c foo.c
cc -G -o foo.so foo.o
or
gcc -fpic -c foo.c
gcc -shared -o foo.so foo.o
Tip: If you want to package your extension modules for wide distribution
you should consider using GNU
Libtool for building shared libraries. It
encapsulates the platform differences into a general and powerful
interface. Serious packaging also requires considerations about
library versioning, symbol resolution methods, and other issues.
The resulting shared library file can then be loaded into
PostgreSQL. When specifying the file name
to the CREATE FUNCTION command, one must give it
the name of the shared library file, not the intermediate object file.
Note that the system's standard shared-library extension (usually
.so or .sl) can be omitted from
the CREATE FUNCTION command, and normally should
be omitted for best portability.
Refer back to Section 9.5.1 about where the
server expects to find the shared library files.