Resource Management

Configuration Parameters

The following table lists the configurable resource group parameters.

Parameter | Description | Value Range | Default
--- | --- | --- | ---
CONCURRENCY | Maximum number of concurrent transactions (active plus idle) allowed in the resource group. | 0 – max_connections | 20
CPU_MAX_PERCENT | Maximum percentage of CPU resources that the resource group can use. | 1 – 100 | -1 (disabled)
CPU_WEIGHT | Scheduling priority weight for the resource group. | 1 – 500 | 100
CPUSET | Specific logical CPU cores (or hyperthreading threads) reserved for this resource group. | System-dependent | -1
MEMORY_QUOTA | Memory limit (in MB) assigned to the resource group. | Integer (MB) | -1 (disabled; statement_mem is the per-query memory limit)
MIN_COST | Minimum plan cost required for a query to be managed by the resource group. | Integer | 0

Note!
Resource groups do not apply to SET, RESET, or SHOW commands.
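These parameters are supplied in the WITH clause of CREATE RESOURCE GROUP. A minimal sketch; the group name rg_reporting, the role report_user, and all values are illustrative, not recommendations:

```sql
-- Illustrative only: names and values are examples, not recommendations.
CREATE RESOURCE GROUP rg_reporting WITH (
    CONCURRENCY=10,        -- at most 10 concurrent transactions
    CPU_MAX_PERCENT=40,    -- hard CPU cap of 40% per segment
    CPU_WEIGHT=200,        -- scheduling weight relative to other groups
    MEMORY_QUOTA=2048,     -- 2 GB reserved per segment
    MIN_COST=50            -- only plans costing at least 50 are managed
);

-- Assign a role to the group so its queries are governed by these limits.
ALTER ROLE report_user RESOURCE GROUP rg_reporting;
```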

Configuration Details

When a user runs a query, YMatrix evaluates the query against the limits defined for its assigned resource group.

Concurrency Limits

CONCURRENCY controls the maximum number of concurrent transactions allowed in a resource group. The default is 20. A value of 0 disables query execution for the group.

If the group's resource limits have not been reached and the query will not exceed the concurrency cap, YMatrix executes it immediately. Once the concurrency limit is reached, YMatrix queues subsequent transactions until running queries complete.

The parameter gp_resource_group_queuing_timeout specifies how long a queued transaction waits before being canceled. The default is 0, meaning indefinite queuing.
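As a sketch, the concurrency cap of an existing group can be raised with ALTER RESOURCE GROUP, and the queue timeout set per session. The group name and values are illustrative; the timeout is specified in milliseconds:

```sql
-- Raise the concurrency cap for an existing group (name is illustrative).
ALTER RESOURCE GROUP rg_reporting SET CONCURRENCY 30;

-- Cancel queued transactions after 10 minutes instead of waiting indefinitely
-- (value in milliseconds; 0 restores indefinite queuing).
SET gp_resource_group_queuing_timeout = 600000;
```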

Bypassing Resource Group Limits

  • gp_resource_group_bypass: Enables or disables bypassing concurrency limits. When set to true, queries skip concurrency checks and execute immediately. Memory is allocated based on statement_mem; insufficient memory causes query failure. This parameter can only be set at the session level and cannot be changed within a transaction or function.
  • gp_resource_group_bypass_catalog_query: Controls whether catalog queries bypass resource group limits. Default is true. Useful for GUI clients that run metadata queries against system catalogs. These queries operate outside resource groups and use statement_mem for memory allocation.
  • gp_resource_group_bypass_direct_dispatch: Determines if direct dispatch queries bypass resource group limits. When true, such queries ignore CPU and memory constraints of their assigned group and execute immediately. Memory is allocated per statement_mem; insufficient memory leads to failure. This setting is session-scoped and cannot be modified inside transactions or functions.
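A sketch of session-level bypass under the rules above; the table name is hypothetical:

```sql
-- Session level only; cannot be changed inside a transaction or function.
SET gp_resource_group_bypass = true;

-- Runs immediately, skipping the group's concurrency check;
-- memory comes from statement_mem, and the query fails if it is insufficient.
SELECT count(*) FROM sales;  -- "sales" is a hypothetical table

SET gp_resource_group_bypass = false;
```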

CPU Limits

YMatrix supports two CPU allocation modes:

  1. Percentage-based CPU allocation
  2. Core-based CPU allocation

Different resource groups in the same cluster may use different allocation modes, but each group uses only one mode at a time. The allocation mode can be changed at runtime.

The global parameter gp_resource_group_cpu_limit defines the maximum percentage of system CPU resources that can be allocated to resource groups on each segment node.

Core-Based CPU Allocation

CPUSET reserves specific CPU cores for a resource group. When CPUSET is configured, YMatrix disables CPU_MAX_PERCENT and CPU_WEIGHT for that group and sets both to -1.

Usage Notes:

  • Use a semicolon (;) to separate CPU core specifications for the master and segments. Within each part, use commas (,) to list individual cores or ranges, enclosed in single quotes (' '). For example, '1;1,3-4' assigns core 1 on the master and cores 1, 3, and 4 on segments.
  • Avoid using CPU core 0. Prefer lower-numbered cores when assigning cores to resource groups. If you later restore the database on a node with fewer CPU cores (e.g., moving from a 16-core to an 8-core system), operations may fail. For instance, assigning core 9 on a 16-core system will cause failure when restoring to an 8-core node.
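Applying the format described above, a sketch of core-based allocation; the group name and core numbers are illustrative:

```sql
-- Reserve core 1 on the master and cores 1, 3, and 4 on each segment.
CREATE RESOURCE GROUP rg_pinned WITH (CPUSET='1;1,3-4');

-- An existing group can also be switched to core-based allocation;
-- its CPU_MAX_PERCENT and CPU_WEIGHT are then set to -1.
ALTER RESOURCE GROUP rg_pinned SET CPUSET '1;3-4';
```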

Percentage-Based CPU Allocation

CPU_MAX_PERCENT sets the hard upper limit on CPU usage for a resource group on each segment. For example, a value of 40 allows up to 40% of available CPU resources. Idle time from underutilized groups is pooled into a global unused CPU cycle pool, which other groups can borrow from.

CPU_WEIGHT determines the relative CPU time share for the group. The default is 100, with a valid range of 1 to 500.

Usage Notes:

  • If one group has a weight of 100 and two others have weights of 50 each, and all three demand 100% CPU (CPU_MAX_PERCENT = 100 for all), the weights sum to 200: the first group receives 100/200 = 50% of total CPU time, and the other two receive 50/200 = 25% each.
  • Adding a fourth group with weight 100 (and CPU_MAX_PERCENT = 100) raises the total weight to 300: each 100-weight group now receives 100/300 ≈ 33.3%, and each 50-weight group receives 50/300 ≈ 16.7%.
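The weight arithmetic above can be reproduced by configuring the groups directly. A sketch with illustrative group names; each group's share is its weight divided by the sum of all competing weights, and unspecified parameters take their defaults:

```sql
-- Three groups all allowed up to 100% CPU; only CPU_WEIGHT differs.
CREATE RESOURCE GROUP rg_a WITH (CPU_MAX_PERCENT=100, CPU_WEIGHT=100);  -- 100/200 = 50%
CREATE RESOURCE GROUP rg_b WITH (CPU_MAX_PERCENT=100, CPU_WEIGHT=50);   -- 50/200  = 25%
CREATE RESOURCE GROUP rg_c WITH (CPU_MAX_PERCENT=100, CPU_WEIGHT=50);   -- 50/200  = 25%

-- A fourth group re-divides CPU time: 100/300, 50/300, 50/300, 100/300.
CREATE RESOURCE GROUP rg_d WITH (CPU_MAX_PERCENT=100, CPU_WEIGHT=100);
```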

Configuration Example

Group Name | CONCURRENCY | CPU_MAX_PERCENT | CPU_WEIGHT
--- | --- | --- | ---
default_group | 20 | 50 | 10
admin_group | 10 | 70 | 30
system_group | 10 | 30 | 10
test | 10 | 10 | 10

  • Roles in default_group receive a baseline CPU share of 10/(10+30+10+10) ≈ 16.7% under high load. When idle CPU is available, they can use up to the hard limit of 50%.
  • Roles in admin_group receive 30/(10+30+10+10) = 50% under high load, with a hard cap of 70% when idle CPU is available.
  • Roles in test receive the same ≈16.7% baseline but are capped at 10% by their CPU_MAX_PERCENT setting, even when the system is otherwise idle.
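A sketch of how this table could be applied. Note that default_group, admin_group, and system_group are built-in groups and are only altered here, while test is created; whether every built-in group accepts every setting may vary by version:

```sql
ALTER RESOURCE GROUP default_group SET CPU_MAX_PERCENT 50;
ALTER RESOURCE GROUP default_group SET CPU_WEIGHT 10;

ALTER RESOURCE GROUP admin_group SET CONCURRENCY 10;
ALTER RESOURCE GROUP admin_group SET CPU_MAX_PERCENT 70;
ALTER RESOURCE GROUP admin_group SET CPU_WEIGHT 30;

CREATE RESOURCE GROUP test WITH (CONCURRENCY=10, CPU_MAX_PERCENT=10, CPU_WEIGHT=10);
```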

Memory Limits

MEMORY_QUOTA specifies the total memory (in MB) reserved for the resource group on each segment. This is the aggregate memory available to all active queries in the group on that segment. By default, each query is allocated MEMORY_QUOTA / CONCURRENCY MB.

To override this, a session can set gp_resgroup_memory_query_fixed_mem to specify a fixed memory amount for a query, which may exceed the group’s per-query allocation.

Usage Notes:

  • If gp_resgroup_memory_query_fixed_mem is set, it overrides the resource group’s memory allocation.
  • If unset, memory per query is MEMORY_QUOTA / CONCURRENCY.
  • If MEMORY_QUOTA is not set (i.e., -1), statement_mem is used as the per-query memory limit.
  • All queries spill to disk if system memory is insufficient. If the spill file limit (gp_workfile_limit_files_per_query) is reached, YMatrix raises an out-of-memory (OOM) error.
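A sketch tying these rules together; the group name and values are illustrative, and the unit syntax for gp_resgroup_memory_query_fixed_mem follows the form used in this document:

```sql
-- 1536 MB per segment shared by up to 3 concurrent queries:
-- each query gets 1536 / 3 = 512 MB by default.
CREATE RESOURCE GROUP adhoc WITH (CONCURRENCY=3, MEMORY_QUOTA=1536);

-- Override the default per-query share for the current session only.
SET gp_resgroup_memory_query_fixed_mem = '800MB';
```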

Configuration Example

Consider a resource group named adhoc with MEMORY_QUOTA = 1536 MB (1.5 GB) and CONCURRENCY = 3. By default, each query gets 1536 / 3 = 512 MB. Now consider this sequence:

  • User ADHOC_1 submits query Q1 with gp_resgroup_memory_query_fixed_mem = 800MB. Q1 is admitted.
  • User ADHOC_2 submits query Q2 using the default 512 MB.
  • While Q1 and Q2 are running, user ADHOC_3 submits query Q3 with the default 512 MB.
  • Q1 and Q2 together hold 1312 MB of the group’s 1536 MB quota, leaving only 224 MB; Q3’s default 512 MB exceeds the remainder, but if sufficient system memory is available, Q3 may still run.
  • User ADHOC_4 submits query Q4 with gp_resgroup_memory_query_fixed_mem = 700MB.
  • Since Q4 bypasses group limits via gp_resgroup_memory_query_fixed_mem, it runs immediately.

Special Considerations

  • If gp_resource_group_bypass or gp_resource_group_bypass_catalog_query is enabled, the query uses statement_mem as its memory limit.
  • If (MEMORY_QUOTA / CONCURRENCY) < statement_mem, then statement_mem is used as the fixed per-query memory allocation.
  • The maximum allowed value for statement_mem is bounded by max_statement_mem.
  • Queries with plan cost below MIN_COST use statement_mem as their memory quota.
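For the statement_mem fallback cases above, a brief sketch; the value is illustrative, and statement_mem cannot exceed max_statement_mem:

```sql
SHOW max_statement_mem;   -- upper bound for statement_mem

-- Used as the per-query memory limit when MEMORY_QUOTA is -1,
-- when a query bypasses its resource group, or when plan cost < MIN_COST.
SET statement_mem = '256MB';
```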