
Check Cloudberry Database System

You can check a Cloudberry Database system using a variety of tools included with the system or available as plugins.

Observing the day-to-day performance of a Cloudberry Database system helps administrators understand system behavior, plan workloads, and troubleshoot problems. This document introduces scenarios for diagnosing database performance and activity.

As a Cloudberry Database administrator, you need to check the system for problem events such as a segment going down or running out of disk space on a segment host. The following topics describe how to check the health of a Cloudberry Database system and examine certain state information for a Cloudberry Database system.

Check system state

A Cloudberry Database system comprises multiple PostgreSQL instances (the coordinator and segments) spanning multiple machines. To check a Cloudberry Database system, you need information about the system as a whole, as well as status information for the individual instances. The gpstate utility provides status information about a Cloudberry Database system.

View coordinator and segment status and configuration

The default gpstate action is to check segment instances and show a brief status of the valid and failed segments. For example, to see a quick status of your Cloudberry Database system:

gpstate

To see more detailed information about your Cloudberry Database array configuration, use gpstate with the -s option:

gpstate -s

View your mirroring configuration and status

If you are using mirroring for data redundancy, you might want to see the list of mirror segment instances in the system, their current synchronization status, and the mirror to primary mapping. For example, to see the mirror segments in the system and their status:

gpstate -m

To see the primary to mirror segment mappings:

gpstate -c

To see the status of the standby coordinator mirror:

gpstate -f

Check disk space usage

For database administrators, checking disk space usage is crucial. As a guideline, keep coordinator and segment data directories below 70% full. Although a filled disk does not corrupt data, it can stop regular database activity and eventually force a server shutdown.

You can use the gp_disk_free external table in the gp_toolkit administrative schema to check for remaining free space (in kilobytes) on the segment host file systems. For example:

SELECT * FROM gp_toolkit.gp_disk_free ORDER BY dfsegment;

Check the sizing of distributed databases and tables

The gp_toolkit administrative schema contains several views that you can use to determine the disk space usage for a distributed Cloudberry Database database, schema, table, or index.

View disk space usage for a database

To see the total size of a database (in bytes), use the gp_size_of_database view in the gp_toolkit administrative schema. For example:

SELECT * FROM gp_toolkit.gp_size_of_database ORDER BY sodddatname;

View disk space usage for a table

The gp_toolkit administrative schema contains several views for checking the size of a table. The table sizing views list the table by object ID (not by name). For example:

SELECT relname AS name, sotdsize AS size, sotdtoastsize 
AS toast, sotdadditionalsize AS other
FROM gp_toolkit.gp_size_of_table_disk as sotd, pg_class
WHERE sotd.sotdoid=pg_class.oid ORDER BY relname;

To check the size of a table by name, you need to look up the relation name (relname) in the pg_class table. For example, to see the size of the table test_table:

SELECT relname AS name, sotdsize AS size, sotdtoastsize 
AS toast, sotdadditionalsize AS other
FROM gp_toolkit.gp_size_of_table_disk as sotd, pg_class
WHERE sotd.sotdoid=pg_class.oid AND pg_class.relname = 'test_table'
ORDER BY relname;

View disk space usage for indexes

The gp_toolkit administrative schema contains a number of views for checking index sizes. To see the total size of all index(es) on a table, use the gp_size_of_all_table_indexes view. To see the size of a particular index, use the gp_size_of_index view. The index sizing views list tables and indexes by object ID (not by name). For example, to see the size of all indexes on a table:

SELECT soisize, relname as indexname
FROM pg_class, gp_toolkit.gp_size_of_index
WHERE pg_class.oid=gp_size_of_index.soioid
AND pg_class.relkind='i';

To check the size of an index by name, you need to look up the relation name (relname) in the pg_class table. For example, to check the size of the index test_index:

SELECT pg_class.relname AS indexname, gp_toolkit.gp_size_of_index.soioid, gp_toolkit.gp_size_of_index.soisize
FROM pg_class, gp_toolkit.gp_size_of_index
WHERE pg_class.oid = gp_toolkit.gp_size_of_index.soioid
AND pg_class.relkind = 'i'
AND pg_class.relname = 'test_index';

Check for data distribution skew

All tables in Cloudberry Database are distributed, meaning their data is divided across all of the segments in the system. Unevenly distributed data might diminish query processing performance. A table's distribution policy, set at table creation time, determines how the table's rows are distributed. For information about choosing a table distribution policy, see the documentation on table distribution policies.

The gp_toolkit administrative schema also contains a number of views for checking data distribution skew on a table.
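For example, in the Greenplum-lineage gp_toolkit schema, the gp_skew_coefficients view reports a per-table coefficient of variation of row counts across segments (larger values mean more skew), and gp_skew_idle_fractions reports the fraction of system resources idled by skew. These views count rows, so they can be slow on large tables; verify the view and column names against your Cloudberry Database version. A sketch:

```sql
-- Coefficient of variation of row counts across segments, worst first.
SELECT * FROM gp_toolkit.gp_skew_coefficients ORDER BY skccoeff DESC;

-- Fraction of system resources idled by skew, worst first.
SELECT * FROM gp_toolkit.gp_skew_idle_fractions ORDER BY siffraction DESC;
```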

View a table's distribution key

To see the columns used as the data distribution key for a table, you can use the \d+ meta-command in psql to examine the definition of a table. For example:

=# \d+ sales

                Table "retail.sales"
 Column  |  Type   | Modifiers | Description
---------+---------+-----------+-------------
 sale_id | integer |           |
 amt     | float   |           |
 date    | date    |           |
Has OIDs: no
Distributed by: (sale_id)

When you create a replicated table, Cloudberry Database stores all rows in the table on every segment. Replicated tables have no distribution key. Where the \d+ meta-command reports the distribution key for a normally distributed table, it shows Distributed Replicated for a replicated table.
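Replicated distribution is requested at table creation time with the DISTRIBUTED REPLICATED clause. A minimal sketch (hypothetical table name):

```sql
-- Every segment stores a full copy of this table's rows.
CREATE TABLE system_config (
    param_name  text,
    param_value text
) DISTRIBUTED REPLICATED;
```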

View data distribution

To see the data distribution of a table's rows (the number of rows on each segment), you can run a query such as:

SELECT gp_segment_id, count(*) 
FROM <table_name> GROUP BY gp_segment_id;

A table is considered to have a balanced distribution if all segments have roughly the same number of rows.

tip

If you run this query on a replicated table, it fails because Cloudberry Database does not permit user queries to reference the system column gp_segment_id (or the system columns ctid, cmin, cmax, xmin, and xmax) in replicated tables. Because every segment has all of the table's rows, replicated tables are evenly distributed by definition.
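For a distributed (non-replicated) table, the per-segment row counts can be rolled up into a single skew figure. The following sketch (with a hypothetical table name my_table) reports how far the fullest segment deviates from the per-segment average; values near zero indicate a balanced distribution:

```sql
-- Percentage by which the busiest segment exceeds the per-segment average.
SELECT (max(c) - avg(c)) / avg(c) * 100.0 AS max_pct_over_avg
FROM (
    SELECT gp_segment_id, count(*) AS c
    FROM my_table          -- replace with your table name
    GROUP BY gp_segment_id
) AS per_segment;
```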

Check for query processing skew

When a query is being processed, all segments should have equal workloads to ensure the best possible performance. If you identify a poorly-performing query, you might need to investigate further using the EXPLAIN command.

Query processing workload can be skewed if the table's data distribution policy and the query predicates are not well matched. To check for processing skew, you can run a query such as:

=# SELECT gp_segment_id, count(*) FROM <table_name>
   WHERE <column>='<value>' GROUP BY gp_segment_id;

This shows the number of rows returned per segment for the given WHERE predicate.

As noted in View data distribution, this query fails if you run it on a replicated table because you cannot reference the gp_segment_id system column in a query on a replicated table.

Avoid an extreme skew warning

You might receive the following warning message while running a query that performs a hash join operation:

Extreme skew in the innerside of Hashjoin

This occurs when the input to a hash join operator is skewed. It does not prevent the query from completing successfully. You can follow these steps to avoid skew in the plan:

  1. Ensure that all fact tables are analyzed.

  2. Verify that any populated temporary table used by the query is analyzed.

  3. View the EXPLAIN ANALYZE plan for the query and look for the following:

    • If there are scans with multi-column filters that are producing more rows than estimated, then set the gp_selectivity_damping_factor server configuration parameter to 2 or higher and retest the query.
    • If the skew occurs while joining a single fact table that is relatively small (less than 5000 rows), set the gp_segments_for_planner server configuration parameter to 1 and retest the query.
  4. Check whether the filters applied in the query match distribution keys of the base tables. If the filters and distribution keys are the same, consider redistributing some of the base tables with different distribution keys.

  5. Check the cardinality of the join keys. If they have low cardinality, try to rewrite the query with different joining columns or additional filters on the tables to reduce the number of rows. These changes could change the query semantics.

View metadata information about database objects

Cloudberry Database tracks various metadata information in its system catalogs about the objects stored in a database, such as tables, views, indexes and so on, as well as global objects such as roles and tablespaces.

View the last operation performed

You can use the system views pg_stat_operations and pg_stat_partition_operations to look up actions performed on an object, such as a table. For example, to see the actions performed on a table, such as when it was created and when it was last vacuumed and analyzed:

=> SELECT schemaname as schema, objname as table, 
   usename as role, actionname as action,
   subtype as type, statime as time
   FROM pg_stat_operations
   WHERE objname='test_table';

schema | table | role | action | type | time
--------+------------+---------+---------+-------+-------------------------------
public | test_table | gpadmin | CREATE | TABLE | 2024-01-16 15:26:36.998256+08
public | test_table | gpadmin | VACUUM | | 2024-01-16 15:26:42.073407+08
public | test_table | gpadmin | ANALYZE | | 2024-01-16 15:26:45.97546+08
(3 rows)

View the definition of an object

To see the definition of an object, such as a table or view, you can use the \d+ meta-command when working in psql. For example, to see the definition of a table:

\d+ <mytable>

View session memory usage information

You can create and use the session_level_memory_consumption view that provides information about the current memory utilization for sessions that are running queries on Cloudberry Database. The view contains session information and information such as the database that the session is connected to, the query that the session is currently running, and memory consumed by the session processes.

Create the session_level_memory_consumption view

To create the session_state.session_level_memory_consumption view in a Cloudberry Database, run the command CREATE EXTENSION gp_internal_tools; once for each database. For example, to install the view in the database testdb, use this command:

psql -d testdb -c "CREATE EXTENSION gp_internal_tools;"

About the session_level_memory_consumption view

The session_state.session_level_memory_consumption view provides information about memory consumption and idle time for sessions that are running SQL queries.

When resource queue-based resource management is active, the column is_runaway indicates whether Cloudberry Database considers the session a runaway session based on the vmem memory consumption of the session's queries. Under the resource queue-based resource management scheme, Cloudberry Database considers the session a runaway when the queries consume an excessive amount of memory. The Cloudberry Database server configuration parameter runaway_detector_activation_percent controls the conditions under which Cloudberry Database considers a session a runaway session.

The is_runaway, runaway_vmem_mb, and runaway_command_cnt columns are not applicable when resource group-based resource management is active.

Column | Type | References | Description
-------|------|------------|------------
datname | name | | Name of the database that the session is connected to.
sess_id | integer | | Session ID.
usename | name | | Name of the session user.
query | text | | Current SQL query that the session is running.
segid | integer | | Segment ID.
vmem_mb | integer | | Total vmem memory usage for the session in MB.
is_runaway | boolean | | Session is marked as runaway on the segment.
qe_count | integer | | Number of query processes for the session.
active_qe_count | integer | | Number of active query processes for the session.
dirty_qe_count | integer | | Number of query processes that have not yet released their memory. The value is -1 for sessions that are not running.
runaway_vmem_mb | integer | | Amount of vmem memory that the session was consuming when it was marked as a runaway session.
runaway_command_cnt | integer | | Command count for the session when it was marked as a runaway session.
idle_start | timestamptz | | The last time a query process in this session became idle.
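Once the extension is created, a typical check might look like the following sketch (column names as in the table above; output depends on the sessions currently running):

```sql
-- Per-segment memory consumption for current sessions, largest first.
SELECT datname, sess_id, usename, segid, vmem_mb, qe_count
FROM session_state.session_level_memory_consumption
ORDER BY vmem_mb DESC;
```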

View and log per-process memory usage information

Cloudberry Database allocates all memory within memory contexts. Memory contexts are a convenient way to manage memory that needs to live for differing amounts of time. Destroying a context releases all of the memory that was allocated in it.

Tracking the amount of memory used by a server process or a long-running query can help detect the source of a potential out-of-memory condition. Cloudberry Database provides a system view and administration functions that you can use for this purpose.

About the pg_backend_memory_contexts view

To display the memory usage of all active memory contexts in the server process attached to the current session, use the pg_backend_memory_contexts system view. This view is restricted to superusers, but access might be granted to other roles.

SELECT * FROM pg_backend_memory_contexts;

About the memory context admin functions

You can use the system administration function pg_log_backend_memory_contexts() to instruct Cloudberry Database to dump the memory usage of other sessions running on the coordinator host into the server log. Execution of this function is restricted to superusers only, and cannot be granted to other roles.

The signature of the pg_log_backend_memory_contexts() function follows:

pg_log_backend_memory_contexts( pid integer )

where pid identifies the process whose memory contexts you want dumped.

pg_log_backend_memory_contexts() returns t when memory context logging is successfully activated for the process on the local host. When logging is activated, Cloudberry Database writes one message to the log for each memory context at the LOG message level. The log messages appear in the server log based on the log configuration set; refer to Error Reporting and Logging in the PostgreSQL documentation for more information. The memory context log messages are not sent to the client.

Sample log messages

The command:

SELECT pg_log_backend_memory_contexts( pg_backend_pid() );

triggered the dumping of the following (subset of) memory context messages to the local server log file:

2024-01-16 16:45:57.228512 UTC,"gpadmin","testdb",p16389,th-557447104,"[local]",,2024-01-16 15:57:32 UTC,0,,cmd10,seg-1,,,,sx1,"LOG","00000","logging memory contexts of PID 16389",,,,,,"SELECT pg_log_backend_memory_contexts(pg_backend_pid());",0,,"mcxt.c",1278,
2024-01-16 16:45:57.229275 UTC,"gpadmin","testdb",p16389,th-557447104,"[local]",,2024-01-16 15:57:32 UTC,0,,cmd10,seg-1,,,,sx1,"LOG","00000","level: 0; TopMemoryContext: 108384 total in 6 blocks; 23248 free (21 chunks); 85136 used",,,,,,,0,,"mcxt.c",884,
2024-01-16 16:45:57.229822 UTC,"gpadmin","testdb",p16389,th-557447104,"[local]",,2024-01-16 15:57:32 UTC,0,,cmd10,seg-1,,,,sx1,"LOG","00000","level: 1; pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 1416 free (0 chunks); 6776 used",,,,,,,0,,"mcxt.c",884,
2024-01-16 16:45:57.230387 UTC,"gpadmin","testdb",p16389,th-557447104,"[local]",,2024-01-16 15:57:32 UTC,0,,cmd10,seg-1,,,,sx1,"LOG","00000","level: 1; TopTransactionContext: 8192 total in 1 blocks; 7576 free (1 chunks); 616 used",,,,,,,0,,"mcxt.c",884,
2024-01-16 16:45:57.230961 UTC,"gpadmin","testdb",p16389,th-557447104,"[local]",,2024-01-16 15:57:32 UTC,0,,cmd10,seg-1,,,,sx1,"LOG","00000","level: 1; TableSpace cache: 8192 total in 1 blocks; 2056 free (0 chunks); 6136 used",,,,,,,0,,"mcxt.c",884,

View query workfile usage information

The Cloudberry Database administrative schema gp_toolkit contains views that display information about Cloudberry Database workfiles. Cloudberry Database creates workfiles on disk if it does not have sufficient memory to run the query in memory. This information can be used for troubleshooting and tuning queries. The information in the views can also be used to specify the values for the Cloudberry Database configuration parameters gp_workfile_limit_per_query and gp_workfile_limit_per_segment.

These are the views in the schema gp_toolkit:

  • The gp_workfile_entries view contains one row for each operator using disk space for workfiles on a segment at the current time.
  • The gp_workfile_usage_per_query view contains one row for each query using disk space for workfiles on a segment at the current time.
  • The gp_workfile_usage_per_segment view contains one row for each segment. Each row displays the total amount of disk space used for workfiles on the segment at the current time.
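For example, to see which segments are currently spilling the most data to workfiles, you might run the following sketch (verify the size column names in the gp_toolkit reference for your version):

```sql
-- Total workfile disk usage per segment, largest consumers first.
SELECT * FROM gp_toolkit.gp_workfile_usage_per_segment ORDER BY size DESC;

-- Workfile usage broken down by query.
SELECT * FROM gp_toolkit.gp_workfile_usage_per_query;
```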

For information about using gp_toolkit, see Using gp_toolkit.

View the database server log files

Every database instance in Cloudberry Database (coordinator and segments) runs a PostgreSQL database server with its own server log file. Log files are created in the log directory of the coordinator and each segment data directory.

Log file format

The server log files are written in comma-separated values (CSV) format. Some log entries will not have values for all log fields. For example, only log entries associated with a query worker process will have the slice_id populated. You can identify related log entries of a particular query by the query's session identifier (gp_session_id) and command identifier (gp_command_count).

The following fields are written to the log:

Number | Field Name | Data Type | Description
-------|------------|-----------|------------
1 | event_time | timestamp with time zone | Time that the log entry was written to the log
2 | user_name | varchar(100) | The database user name
3 | database_name | varchar(100) | The database name
4 | process_id | varchar(10) | The system process ID (prefixed with "p")
5 | thread_id | varchar(50) | The thread count (prefixed with "th")
6 | remote_host | varchar(100) | On the coordinator, the hostname/address of the client machine. On the segment, the hostname/address of the coordinator.
7 | remote_port | varchar(10) | The segment or coordinator port number
8 | session_start_time | timestamp with time zone | Time session connection was opened
9 | transaction_id | int | Top-level transaction ID on the coordinator. This ID is the parent of any subtransactions.
10 | gp_session_id | text | Session identifier number (prefixed with "con")
11 | gp_command_count | text | The command number within a session (prefixed with "cmd")
12 | gp_segment | text | The segment content identifier (prefixed with "seg" for primaries or "mir" for mirrors). The coordinator always has a content ID of -1.
13 | slice_id | text | The slice ID (portion of the query plan being executed)
14 | distr_tranx_id | text | Distributed transaction ID
15 | local_tranx_id | text | Local transaction ID
16 | sub_tranx_id | text | Subtransaction ID
17 | event_severity | varchar(10) | Values include: LOG, ERROR, FATAL, PANIC, DEBUG1, DEBUG2
18 | sql_state_code | varchar(10) | SQL state code associated with the log message
19 | event_message | text | Log or error message text
20 | event_detail | text | Detail message text associated with an error or warning message
21 | event_hint | text | Hint message text associated with an error or warning message
22 | internal_query | text | The internally-generated query text
23 | internal_query_pos | int | The cursor index into the internally-generated query text
24 | event_context | text | The context in which this message gets generated
25 | debug_query_string | text | User-supplied query string with full detail for debugging. This string can be modified for internal use.
26 | error_cursor_pos | int | The cursor index into the query string
27 | func_name | text | The function in which this message is generated
28 | file_name | text | The internal code file where the message originated
29 | file_line | int | The line of the code file where the message originated
30 | stack_trace | text | Stack trace text associated with this message

Search the Cloudberry Database server log files

Cloudberry Database provides a utility called gplogfilter that can search through a Cloudberry Database log file for entries matching the specified criteria. By default, this utility searches through the Cloudberry Database coordinator log file in the default logging location. For example, to display the last three lines of each of the log files under the coordinator directory:

gplogfilter -n 3
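gplogfilter accepts additional filtering options. For example, assuming the -t (trouble: match ERROR, FATAL, and PANIC entries) and -b (begin timestamp) options of the Greenplum-lineage utility are available (check gplogfilter --help on your system), a search for recent problems might look like:

```shell
# Show only ERROR, FATAL, and PANIC entries logged since the given time.
gplogfilter -t -b '2024-01-16 00:00'
```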

Use gp_toolkit

Use the Cloudberry Database administrative schema gp_toolkit to query the system catalogs, log files, and operating environment for system status information. The gp_toolkit schema contains several views you can access using SQL commands. The gp_toolkit schema is accessible to all database users. Some objects require superuser permissions. Use a command similar to the following to add the gp_toolkit schema to your schema search path:

=> ALTER ROLE myrole SET search_path TO myschema,gp_toolkit;

SQL standard error codes

The following table lists all the defined error codes. Some are not used, but are defined by the SQL standard. The error classes are also shown. For each error class there is a standard error code having the last 3 characters 000. This code is used only for error conditions that fall within the class but do not have any more-specific code assigned.

The PL/pgSQL condition name for each error code is the same as the phrase shown in the table, with underscores substituted for spaces. For example, code 22012, DIVISION BY ZERO, has condition name DIVISION_BY_ZERO. Condition names can be written in either upper or lower case.

tip

How to view error codes

When you execute SQL queries or perform other database operations in Cloudberry Database, and an error occurs, the system returns an error message. However, this standard error message may not directly display the SQLSTATE error code. Here are some methods to view these error codes:

  • Use PL/pgSQL exception handling. For example:

    DO $$
    BEGIN
      -- Replace the following SQL with your query
      EXECUTE 'Your SQL Query';
    EXCEPTION WHEN OTHERS THEN
      RAISE NOTICE 'Error code: %', SQLSTATE;
    END
    $$;
  • Check the database log. Cloudberry Database records detailed error information, including error codes, in its log files. Depending on your system setup, you can check the log files on the database server for this information.

  • Use advanced database client tools. Some advanced database client or management tools may offer more detailed error reporting features that can directly display SQLSTATE error codes.

Note:

Not all errors have a specific SQLSTATE error code. Some errors might only have a general error class code, like 'XX000' for an internal error.

When using PL/pgSQL exception handling, ensure that your SQL query statement is correctly formatted as a string, especially when executing SQL statements dynamically.

PL/pgSQL does not recognize warning, as opposed to error, condition names; those are classes 00, 01, and 02.
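As a concrete illustration of the condition-name mapping, the following block (a minimal sketch) forces a division-by-zero error and reports its SQLSTATE, 22012:

```sql
DO $$
BEGIN
    PERFORM 1/0;                    -- raises SQLSTATE 22012
EXCEPTION WHEN division_by_zero THEN
    RAISE NOTICE 'caught division_by_zero, SQLSTATE = %', SQLSTATE;
END
$$;
```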

Error Code | Meaning | Constant
-----------|---------|----------
Class 00 — Successful Completion
00000 | SUCCESSFUL COMPLETION | successful_completion
Class 01 — Warning
01000 | WARNING | warning
0100C | DYNAMIC RESULT SETS RETURNED | dynamic_result_sets_returned
01008 | IMPLICIT ZERO BIT PADDING | implicit_zero_bit_padding
01003 | NULL VALUE ELIMINATED IN SET FUNCTION | null_value_eliminated_in_set_function
01007 | PRIVILEGE NOT GRANTED | privilege_not_granted
01006 | PRIVILEGE NOT REVOKED | privilege_not_revoked
01004 | STRING DATA RIGHT TRUNCATION | string_data_right_truncation
01P01 | DEPRECATED FEATURE | deprecated_feature
Class 02 — No Data (this is also a warning class per the SQL standard)
02000 | NO DATA | no_data
02001 | NO ADDITIONAL DYNAMIC RESULT SETS RETURNED | no_additional_dynamic_result_sets_returned
Class 03 — SQL Statement Not Yet Complete
03000 | SQL STATEMENT NOT YET COMPLETE | sql_statement_not_yet_complete
Class 08 — Connection Exception
08000 | CONNECTION EXCEPTION | connection_exception
08003 | CONNECTION DOES NOT EXIST | connection_does_not_exist
08006 | CONNECTION FAILURE | connection_failure
08001 | SQLCLIENT UNABLE TO ESTABLISH SQLCONNECTION | sqlclient_unable_to_establish_sqlconnection
08004 | SQLSERVER REJECTED ESTABLISHMENT OF SQLCONNECTION | sqlserver_rejected_establishment_of_sqlconnection
08007 | TRANSACTION RESOLUTION UNKNOWN | transaction_resolution_unknown
08P01 | PROTOCOL VIOLATION | protocol_violation
Class 09 — Triggered Action Exception
09000 | TRIGGERED ACTION EXCEPTION | triggered_action_exception
Class 0A — Feature Not Supported
0A000 | FEATURE NOT SUPPORTED | feature_not_supported
Class 0B — Invalid Transaction Initiation
0B000 | INVALID TRANSACTION INITIATION | invalid_transaction_initiation
Class 0F — Locator Exception
0F000 | LOCATOR EXCEPTION | locator_exception
0F001 | INVALID LOCATOR SPECIFICATION | invalid_locator_specification
Class 0L — Invalid Grantor
0L000 | INVALID GRANTOR | invalid_grantor
0LP01 | INVALID GRANT OPERATION | invalid_grant_operation
Class 0P — Invalid Role Specification
0P000 | INVALID ROLE SPECIFICATION | invalid_role_specification
Class 21 — Cardinality Violation
21000 | CARDINALITY VIOLATION | cardinality_violation
Class 22 — Data Exception
22000 | DATA EXCEPTION | data_exception
2202E | ARRAY SUBSCRIPT ERROR | array_subscript_error
22021 | CHARACTER NOT IN REPERTOIRE | character_not_in_repertoire
22008 | DATETIME FIELD OVERFLOW | datetime_field_overflow
22012 | DIVISION BY ZERO | division_by_zero
22005 | ERROR IN ASSIGNMENT | error_in_assignment
2200B | ESCAPE CHARACTER CONFLICT | escape_character_conflict
22022 | INDICATOR OVERFLOW | indicator_overflow
22015 | INTERVAL FIELD OVERFLOW | interval_field_overflow
2201E | INVALID ARGUMENT FOR LOGARITHM | invalid_argument_for_logarithm
2201F | INVALID ARGUMENT FOR POWER FUNCTION | invalid_argument_for_power_function
2201G | INVALID ARGUMENT FOR WIDTH BUCKET FUNCTION | invalid_argument_for_width_bucket_function
22018 | INVALID CHARACTER VALUE FOR CAST | invalid_character_value_for_cast
22007 | INVALID DATETIME FORMAT | invalid_datetime_format
22019 | INVALID ESCAPE CHARACTER | invalid_escape_character
2200D | INVALID ESCAPE OCTET | invalid_escape_octet
22025 | INVALID ESCAPE SEQUENCE | invalid_escape_sequence
22P06 | NONSTANDARD USE OF ESCAPE CHARACTER | nonstandard_use_of_escape_character
22010 | INVALID INDICATOR PARAMETER VALUE | invalid_indicator_parameter_value
22020 | INVALID LIMIT VALUE | invalid_limit_value
22023 | INVALID PARAMETER VALUE | invalid_parameter_value
2201B | INVALID REGULAR EXPRESSION | invalid_regular_expression
22009 | INVALID TIME ZONE DISPLACEMENT VALUE | invalid_time_zone_displacement_value
2200C | INVALID USE OF ESCAPE CHARACTER | invalid_use_of_escape_character
2200G | MOST SPECIFIC TYPE MISMATCH | most_specific_type_mismatch
22004 | NULL VALUE NOT ALLOWED | null_value_not_allowed
22002 | NULL VALUE NO INDICATOR PARAMETER | null_value_no_indicator_parameter
22003 | NUMERIC VALUE OUT OF RANGE | numeric_value_out_of_range
22026 | STRING DATA LENGTH MISMATCH | string_data_length_mismatch
22001 | STRING DATA RIGHT TRUNCATION | string_data_right_truncation
22011 | SUBSTRING ERROR | substring_error
22027 | TRIM ERROR | trim_error
22024 | UNTERMINATED C STRING | unterminated_c_string
2200F | ZERO LENGTH CHARACTER STRING | zero_length_character_string
22P01 | FLOATING POINT EXCEPTION | floating_point_exception
22P02 | INVALID TEXT REPRESENTATION | invalid_text_representation
22P03 | INVALID BINARY REPRESENTATION | invalid_binary_representation
22P04 | BAD COPY FILE FORMAT | bad_copy_file_format
22P05 | UNTRANSLATABLE CHARACTER | untranslatable_character
Class 23 — Integrity Constraint Violation
23000 | INTEGRITY CONSTRAINT VIOLATION | integrity_constraint_violation
23001 | RESTRICT VIOLATION | restrict_violation
23502 | NOT NULL VIOLATION | not_null_violation
23503 | FOREIGN KEY VIOLATION | foreign_key_violation
23505 | UNIQUE VIOLATION | unique_violation
23514 | CHECK VIOLATION | check_violation
Class 24 — Invalid Cursor State
24000 | INVALID CURSOR STATE | invalid_cursor_state
Class 25 — Invalid Transaction State
25000 | INVALID TRANSACTION STATE | invalid_transaction_state
25001 | ACTIVE SQL TRANSACTION | active_sql_transaction
25002 | BRANCH TRANSACTION ALREADY ACTIVE | branch_transaction_already_active
25008 | HELD CURSOR REQUIRES SAME ISOLATION LEVEL | held_cursor_requires_same_isolation_level
25003 | INAPPROPRIATE ACCESS MODE FOR BRANCH TRANSACTION | inappropriate_access_mode_for_branch_transaction
25004 | INAPPROPRIATE ISOLATION LEVEL FOR BRANCH TRANSACTION | inappropriate_isolation_level_for_branch_transaction
25005 | NO ACTIVE SQL TRANSACTION FOR BRANCH TRANSACTION | no_active_sql_transaction_for_branch_transaction
25006 | READ ONLY SQL TRANSACTION | read_only_sql_transaction
25007 | SCHEMA AND DATA STATEMENT MIXING NOT SUPPORTED | schema_and_data_statement_mixing_not_supported
25P01 | NO ACTIVE SQL TRANSACTION | no_active_sql_transaction
25P02 | IN FAILED SQL TRANSACTION | in_failed_sql_transaction
Class 26 — Invalid SQL Statement Name
26000 | INVALID SQL STATEMENT NAME | invalid_sql_statement_name
Class 27 — Triggered Data Change Violation
27000 | TRIGGERED DATA CHANGE VIOLATION | triggered_data_change_violation
Class 28 — Invalid Authorization Specification
28000 | INVALID AUTHORIZATION SPECIFICATION | invalid_authorization_specification
Class 2B — Dependent Privilege Descriptors Still Exist
2B000 | DEPENDENT PRIVILEGE DESCRIPTORS STILL EXIST | dependent_privilege_descriptors_still_exist
2BP01 | DEPENDENT OBJECTS STILL EXIST | dependent_objects_still_exist
Class 2D — Invalid Transaction Termination
2D000 | INVALID TRANSACTION TERMINATION | invalid_transaction_termination
Class 2F — SQL Routine Exception
2F000 | SQL ROUTINE EXCEPTION | sql_routine_exception
2F005 | FUNCTION EXECUTED NO RETURN STATEMENT | function_executed_no_return_statement
2F002 | MODIFYING SQL DATA NOT PERMITTED | modifying_sql_data_not_permitted
2F003 | PROHIBITED SQL STATEMENT ATTEMPTED | prohibited_sql_statement_attempted
2F004 | READING SQL DATA NOT PERMITTED | reading_sql_data_not_permitted
Class 34 — Invalid Cursor Name
34000 | INVALID CURSOR NAME | invalid_cursor_name
Class 38 — External Routine Exception
38000 | EXTERNAL ROUTINE EXCEPTION | external_routine_exception
38001 | CONTAINING SQL NOT PERMITTED | containing_sql_not_permitted
38002 | MODIFYING SQL DATA NOT PERMITTED | modifying_sql_data_not_permitted
38003 | PROHIBITED SQL STATEMENT ATTEMPTED | prohibited_sql_statement_attempted
38004 | READING SQL DATA NOT PERMITTED | reading_sql_data_not_permitted
Class 39 — External Routine Invocation Exception
39000 | EXTERNAL ROUTINE INVOCATION EXCEPTION | external_routine_invocation_exception
39001 | INVALID SQLSTATE RETURNED | invalid_sqlstate_returned
39004 | NULL VALUE NOT ALLOWED | null_value_not_allowed
39P01 | TRIGGER PROTOCOL VIOLATED | trigger_protocol_violated
39P02 | SRF PROTOCOL VIOLATED | srf_protocol_violated
Class 3B — Savepoint Exception
3B000 | SAVEPOINT EXCEPTION | savepoint_exception
3B001 | INVALID SAVEPOINT SPECIFICATION | invalid_savepoint_specification
Class 3D — Invalid Catalog Name
3D000 | INVALID CATALOG NAME | invalid_catalog_name
Class 3F — Invalid Schema Name
3F000 | INVALID SCHEMA NAME | invalid_schema_name
Class 40 — Transaction Rollback
40000 | TRANSACTION ROLLBACK | transaction_rollback
40002 | TRANSACTION INTEGRITY CONSTRAINT VIOLATION | transaction_integrity_constraint_violation
40001 | SERIALIZATION FAILURE | serialization_failure
40003 | STATEMENT COMPLETION UNKNOWN | statement_completion_unknown
40P01 | DEADLOCK DETECTED | deadlock_detected
Class 42 — Syntax Error or Access Rule Violation
42000 | SYNTAX ERROR OR ACCESS RULE VIOLATION | syntax_error_or_access_rule_violation
42601 | SYNTAX ERROR | syntax_error
42501 | INSUFFICIENT PRIVILEGE | insufficient_privilege
42846 | CANNOT COERCE | cannot_coerce
42803 | GROUPING ERROR | grouping_error
42830 | INVALID FOREIGN KEY | invalid_foreign_key
42602 | INVALID NAME | invalid_name
42622 | NAME TOO LONG | name_too_long
42939 | RESERVED NAME | reserved_name
42804 | DATATYPE MISMATCH | datatype_mismatch
42P18 | INDETERMINATE DATATYPE | indeterminate_datatype
42809 | WRONG OBJECT TYPE | wrong_object_type
42703 | UNDEFINED COLUMN | undefined_column
42883 | UNDEFINED FUNCTION | undefined_function
42P01 | UNDEFINED TABLE | undefined_table
42P02 | UNDEFINED PARAMETER | undefined_parameter
42704 | UNDEFINED OBJECT | undefined_object
42701 | DUPLICATE COLUMN | duplicate_column
42P03 | DUPLICATE CURSOR | duplicate_cursor
42P04 | DUPLICATE DATABASE | duplicate_database
42723 | DUPLICATE FUNCTION | duplicate_function
42P05 | DUPLICATE PREPARED STATEMENT | duplicate_prepared_statement
42P06 | DUPLICATE SCHEMA | duplicate_schema
42P07 | DUPLICATE TABLE | duplicate_table
42712 | DUPLICATE ALIAS | duplicate_alias
42710 | DUPLICATE OBJECT | duplicate_object
42702 | AMBIGUOUS COLUMN | ambiguous_column
42725 | AMBIGUOUS FUNCTION | ambiguous_function
42P08 | AMBIGUOUS PARAMETER | ambiguous_parameter
42P09 | AMBIGUOUS ALIAS | ambiguous_alias
42P10 | INVALID COLUMN REFERENCE | invalid_column_reference
42611 | INVALID COLUMN DEFINITION | invalid_column_definition
42P11 | INVALID CURSOR DEFINITION | invalid_cursor_definition
42P12 | INVALID DATABASE DEFINITION | invalid_database_definition
42P13 | INVALID FUNCTION DEFINITION | invalid_function_definition
42P14 | INVALID PREPARED STATEMENT DEFINITION | invalid_prepared_statement_definition
42P15 | INVALID SCHEMA DEFINITION | invalid_schema_definition
42P16 | INVALID TABLE DEFINITION | invalid_table_definition
42P17 | INVALID OBJECT DEFINITION | invalid_object_definition
Class 44 — WITH CHECK OPTION Violation
44000 | WITH CHECK OPTION VIOLATION | with_check_option_violation
Class 53 — Insufficient Resources
53000 | INSUFFICIENT RESOURCES | insufficient_resources
53100 | DISK FULL | disk_full
53200 | OUT OF MEMORY | out_of_memory
53300 | TOO MANY CONNECTIONS | too_many_connections
Class 54 — Program Limit Exceeded
54000 | PROGRAM LIMIT EXCEEDED | program_limit_exceeded
54001 | STATEMENT TOO COMPLEX | statement_too_complex
54011 | TOO MANY COLUMNS | too_many_columns
54023 | TOO MANY ARGUMENTS | too_many_arguments
Class 55 — Object Not In Prerequisite State
55000 | OBJECT NOT IN PREREQUISITE STATE | object_not_in_prerequisite_state
55006 | OBJECT IN USE | object_in_use
55P02 | CANT CHANGE RUNTIME PARAM | cant_change_runtime_param
55P03 | LOCK NOT AVAILABLE | lock_not_available
Class 57 — Operator Intervention
57000 | OPERATOR INTERVENTION | operator_intervention
57014 | QUERY CANCELED | query_canceled
57P01 | ADMIN SHUTDOWN | admin_shutdown
57P02 | CRASH SHUTDOWN | crash_shutdown
57P03 | CANNOT CONNECT NOW | cannot_connect_now
Class 58 — System Error (errors external to Cloudberry Database)
58030 | IO ERROR | io_error
58P01 | UNDEFINED FILE | undefined_file
58P02 | DUPLICATE FILE | duplicate_file
Class F0 — Configuration File Error
F0000 | CONFIG FILE ERROR | config_file_error
F0001 | LOCK FILE EXISTS | lock_file_exists
Class P0 — PL/pgSQL Error
P0000 | PLPGSQL ERROR | plpgsql_error
P0001 | RAISE EXCEPTION | raise_exception
P0002 | NO DATA FOUND | no_data_found
P0003 | TOO MANY ROWS | too_many_rows
Class XX — Internal Error
XX000 | INTERNAL ERROR | internal_error
XX001 | DATA CORRUPTED | data_corrupted
XX002 | INDEX CORRUPTED | index_corrupted